## Development of swarm behavior in artificial learning agents that adapt to different foraging environments

2020
##### Authors
López-Incera, Andrea
Ried, Katja
Müller, Thomas
Briegel, Hans J.
Journal article
Published
##### Published in
PloS one 15(12), e0243628 (2020). Public Library of Science (PLoS). eISSN 1932-6203
##### Abstract
Collective behavior, and swarm formation in particular, has been studied from several perspectives within a large variety of fields, ranging from biology to physics. In this work, we apply Projective Simulation to model each individual as an artificial learning agent that interacts with its neighbors and surroundings in order to make decisions and learn from them. Within a reinforcement learning framework, we discuss one-dimensional learning scenarios where agents need to get to food resources to be rewarded. We observe how different types of collective motion emerge depending on the distance the agents need to travel to reach the resources. For instance, strongly aligned swarms emerge when the food source is placed far away from the region where agents are situated initially. In addition, we study the properties of the individual trajectories that occur within the different types of emergent collective dynamics. Agents trained to find distant resources exhibit individual trajectories that are in most cases best fit by composite correlated random walks with features that resemble Lévy walks. This composite motion emerges from the collective behavior developed under the specific foraging selection pressures. On the other hand, agents trained to reach nearby resources predominantly exhibit Brownian trajectories.
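The abstract contrasts two trajectory types: Brownian motion for agents trained on nearby resources, and composite correlated random walks with Lévy-like features for agents trained on distant ones. The toy sketch below is not the paper's Projective Simulation model; it is an illustrative comparison, with all function names and parameter values chosen here as assumptions, showing how a two-mode (composite) 1D walk produces far longer straight runs than a purely diffusive one.

```python
import random

def run_lengths(steps):
    """Lengths of consecutive same-direction moves in a 1D step sequence."""
    runs, current = [], 1
    for a, b in zip(steps, steps[1:]):
        if a == b:
            current += 1
        else:
            runs.append(current)
            current = 1
    runs.append(current)
    return runs

def walk(n, persistence, rng):
    """1D correlated walk: keep the current direction with prob `persistence`.
    persistence = 0.5 gives an uncorrelated (Brownian-like) walk."""
    d = rng.choice([-1, 1])
    steps = []
    for _ in range(n):
        if rng.random() > persistence:
            d = -d
        steps.append(d)
    return steps

def composite_walk(n, p_intensive, p_extensive, switch, rng):
    """Two-mode composite walk: a diffusive 'intensive' mode and a highly
    persistent 'extensive' mode, switching between modes with prob `switch`."""
    extensive = False
    d = rng.choice([-1, 1])
    steps = []
    for _ in range(n):
        if rng.random() < switch:
            extensive = not extensive
        p = p_extensive if extensive else p_intensive
        if rng.random() > p:
            d = -d
        steps.append(d)
    return steps

rng = random.Random(0)
brown = run_lengths(walk(20000, 0.5, rng))
comp = run_lengths(composite_walk(20000, 0.5, 0.98, 0.01, rng))
print("mean run (Brownian):", sum(brown) / len(brown))
print("mean run (composite):", sum(comp) / len(comp))
print("max run (Brownian):", max(brown), " max run (composite):", max(comp))
```

The heavy tail of the composite walk's run-length distribution (occasional very long straight stretches interleaved with diffusive searching) is the qualitative signature that, in the paper, distinguishes agents trained on distant resources from those trained on nearby ones.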
##### Subject (DDC)
100 Philosophy
##### Cite This
ISO 690
LÓPEZ-INCERA, Andrea, Katja RIED, Thomas MÜLLER, Hans J. BRIEGEL, 2020. Development of swarm behavior in artificial learning agents that adapt to different foraging environments. In: PloS one. Public Library of Science (PLoS). 15(12), e0243628. eISSN 1932-6203. Available under: doi: 10.1371/journal.pone.0243628
BibTeX

```bibtex
@article{LopezIncera2020Devel-52532,
  author  = {López-Incera, Andrea and Ried, Katja and Müller, Thomas and Briegel, Hans J.},
  title   = {Development of swarm behavior in artificial learning agents that adapt to different foraging environments},
  journal = {PloS one},
  year    = {2020},
  volume  = {15},
  number  = {12},
  doi     = {10.1371/journal.pone.0243628},
  note    = {Article Number: e0243628}
}
```

RDF

```xml
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" >
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/52532">
    <dc:contributor>López-Incera, Andrea</dc:contributor>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/40"/>
    <dc:language>eng</dc:language>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/52532/1/Lopez-Incera_2-6bey9ihbl6cs7.pdf"/>
    <dcterms:title>Development of swarm behavior in artificial learning agents that adapt to different foraging environments</dcterms:title>
    <dc:creator>Ried, Katja</dc:creator>
    <dc:creator>Briegel, Hans J.</dc:creator>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/52532"/>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-01-21T13:28:49Z</dcterms:available>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dc:creator>Müller, Thomas</dc:creator>
    <dc:contributor>Müller, Thomas</dc:contributor>
    <dc:contributor>Ried, Katja</dc:contributor>
    <dcterms:abstract xml:lang="eng">Collective behavior, and swarm formation in particular, has been studied from several perspectives within a large variety of fields, ranging from biology to physics. In this work, we apply Projective Simulation to model each individual as an artificial learning agent that interacts with its neighbors and surroundings in order to make decisions and learn from them. Within a reinforcement learning framework, we discuss one-dimensional learning scenarios where agents need to get to food resources to be rewarded. We observe how different types of collective motion emerge depending on the distance the agents need to travel to reach the resources. For instance, strongly aligned swarms emerge when the food source is placed far away from the region where agents are situated initially. In addition, we study the properties of the individual trajectories that occur within the different types of emergent collective dynamics. Agents trained to find distant resources exhibit individual trajectories that are in most cases best fit by composite correlated random walks with features that resemble Lévy walks. This composite motion emerges from the collective behavior developed under the specific foraging selection pressures. On the other hand, agents trained to reach nearby resources predominantly exhibit Brownian trajectories.</dcterms:abstract>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-01-21T13:28:49Z</dc:date>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/40"/>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/52532/1/Lopez-Incera_2-6bey9ihbl6cs7.pdf"/>
    <dc:creator>López-Incera, Andrea</dc:creator>
    <dcterms:issued>2020</dcterms:issued>
    <dc:contributor>Briegel, Hans J.</dc:contributor>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
  </rdf:Description>
</rdf:RDF>
```
