Optimal Feedback Law Recovery by Gradient-Augmented Sparse Polynomial Regression

dc.contributor.authorAzmi, Behzad
dc.contributor.authorKalise, Dante
dc.contributor.authorKunisch, Karl
dc.date.accessioned2022-02-10T14:38:56Z
dc.date.available2022-02-10T14:38:56Z
dc.date.issued2021eng
dc.description.abstractA sparse regression approach for the computation of high-dimensional optimal feedback laws arising in deterministic nonlinear control is proposed. The approach exploits the control-theoretical link between Hamilton-Jacobi-Bellman PDEs characterizing the value function of the optimal control problems, and first-order optimality conditions via Pontryagin's Maximum Principle. The latter is used as a representation formula to recover the value function and its gradient at arbitrary points in the space-time domain through the solution of a two-point boundary value problem. After generating a dataset consisting of different state-value pairs, a hyperbolic cross polynomial model for the value function is fitted using a LASSO regression. An extended set of low- and high-dimensional numerical tests in nonlinear optimal control reveals that enriching the dataset with gradient information reduces the number of training samples, and that the sparse polynomial regression consistently yields a feedback law of lower complexity.eng
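The gradient-augmented LASSO fit described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: it uses scikit-learn's `Lasso`, a total-degree monomial dictionary in place of the hyperbolic cross basis, and a synthetic quadratic value function with known gradients standing in for the two-point boundary value problem solves; all names and parameters here are illustrative assumptions. The key step is stacking value rows and gradient rows into a single regression system.

```python
# Illustrative sketch (not the paper's implementation): gradient-augmented
# LASSO recovery of a polynomial value-function surrogate.
import numpy as np
from itertools import product
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, max_deg = 2, 3

# Monomial dictionary x1^a1 * x2^a2 with total degree <= max_deg
# (the paper uses a hyperbolic cross basis; total degree keeps this short).
exponents = [e for e in product(range(max_deg + 1), repeat=d) if sum(e) <= max_deg]

def phi(X):
    """Evaluate every dictionary monomial at the rows of X."""
    return np.column_stack([np.prod(X**np.array(e), axis=1) for e in exponents])

def grad_phi(X):
    """Gradients of every monomial at the rows of X, stacked to (n*d, n_terms)."""
    cols = []
    for e in exponents:
        g = np.zeros((X.shape[0], d))
        for j in range(d):
            if e[j] > 0:
                e_minus = np.array(e)
                e_minus[j] -= 1
                g[:, j] = e[j] * np.prod(X**e_minus, axis=1)
        cols.append(g.reshape(-1))
    return np.column_stack(cols)

# Toy ground truth standing in for TPBVP samples: V(x) = x1^2 + 0.5*x2^2.
X = rng.uniform(-1, 1, size=(40, d))
V = X[:, 0]**2 + 0.5 * X[:, 1]**2
dV = np.column_stack([2 * X[:, 0], X[:, 1]])

# Gradient augmentation: value rows and gradient rows form one linear system.
A = np.vstack([phi(X), grad_phi(X)])
b = np.concatenate([V, dV.reshape(-1)])

# L1-penalized fit drives irrelevant monomial coefficients to zero.
model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50_000).fit(A, b)
recovered = {e: c for e, c in zip(exponents, model.coef_) if abs(c) > 1e-2}
print(recovered)
```

With gradient rows included, each sample contributes d+1 equations instead of one, which is the mechanism behind the reduced sample counts reported in the abstract.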
dc.description.versionpublishedeng
dc.identifier.ppn1789243718
dc.identifier.urihttps://kops.uni-konstanz.de/handle/123456789/56521
dc.language.isoengeng
dc.rightsterms-of-use
dc.rights.urihttps://rightsstatements.org/page/InC/1.0/
dc.subjectOptimal Feedback Control, Optimality Conditions, Hamilton-Jacobi-Bellman PDE, Polynomial Approximation, Sparse Optimizationeng
dc.subject.ddc004eng
dc.titleOptimal Feedback Law Recovery by Gradient-Augmented Sparse Polynomial Regressioneng
dc.typeJOURNAL_ARTICLEeng
dspace.entity.typePublication
kops.citation.bibtex
@article{Azmi2021Optim-56521,
  year={2021},
  title={Optimal Feedback Law Recovery by Gradient-Augmented Sparse Polynomial Regression},
  url={https://jmlr.org/papers/v22/20-755.html},
  volume={22},
  issn={1532-4435},
  journal={Journal of Machine Learning Research (JMLR)},
  author={Azmi, Behzad and Kalise, Dante and Kunisch, Karl},
  note={Article Number: 48}
}
kops.citation.iso690AZMI, Behzad, Dante KALISE, Karl KUNISCH, 2021. Optimal Feedback Law Recovery by Gradient-Augmented Sparse Polynomial Regression. In: Journal of Machine Learning Research (JMLR). Microtome Publishing. 2021, 22, 48. ISSN 1532-4435. eISSN 1533-7928eng
kops.citation.rdf
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/56521">
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/56521"/>
    <dc:language>eng</dc:language>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-02-10T14:38:56Z</dc:date>
    <dc:rights>terms-of-use</dc:rights>
    <dc:contributor>Azmi, Behzad</dc:contributor>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/39"/>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/56521/1/Azmi_2-ziodqjb4nyrb7.pdf"/>
    <dc:creator>Azmi, Behzad</dc:creator>
    <dcterms:abstract xml:lang="eng">A sparse regression approach for the computation of high-dimensional optimal feedback laws arising in deterministic nonlinear control is proposed. The approach exploits the control-theoretical link between Hamilton-Jacobi-Bellman PDEs characterizing the value function of the optimal control problems, and first-order optimality conditions via Pontryagin's Maximum Principle. The latter is used as a representation formula to recover the value function and its gradient at arbitrary points in the space-time domain through the solution of a two-point boundary value problem. After generating a dataset consisting of different state-value pairs, a hyperbolic cross polynomial model for the value function is fitted using a LASSO regression. An extended set of low- and high-dimensional numerical tests in nonlinear optimal control reveals that enriching the dataset with gradient information reduces the number of training samples, and that the sparse polynomial regression consistently yields a feedback law of lower complexity.</dcterms:abstract>
    <dcterms:issued>2021</dcterms:issued>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-02-10T14:38:56Z</dcterms:available>
    <dc:creator>Kunisch, Karl</dc:creator>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/56521/1/Azmi_2-ziodqjb4nyrb7.pdf"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/39"/>
    <dcterms:title>Optimal Feedback Law Recovery by Gradient-Augmented Sparse Polynomial Regression</dcterms:title>
    <dc:contributor>Kunisch, Karl</dc:contributor>
    <dcterms:rights rdf:resource="https://rightsstatements.org/page/InC/1.0/"/>
    <dc:creator>Kalise, Dante</dc:creator>
    <dc:contributor>Kalise, Dante</dc:contributor>
  </rdf:Description>
</rdf:RDF>
kops.description.openAccessopenaccessgoldeng
kops.flag.isPeerReviewedtrueeng
kops.flag.knbibliographyfalse
kops.identifier.nbnurn:nbn:de:bsz:352-2-ziodqjb4nyrb7
kops.sourcefieldJournal of Machine Learning Research (JMLR). Microtome Publishing. 2021, 22, 48. ISSN 1532-4435. eISSN 1533-7928deu
kops.sourcefield.plainJournal of Machine Learning Research (JMLR). Microtome Publishing. 2021, 22, 48. ISSN 1532-4435. eISSN 1533-7928eng
kops.urlhttps://jmlr.org/papers/v22/20-755.htmleng
kops.urlDate2022-02-10eng
relation.isAuthorOfPublicationaf05d93d-70bb-4270-bcdd-d16b11682843
relation.isAuthorOfPublication.latestForDiscoveryaf05d93d-70bb-4270-bcdd-d16b11682843
source.bibliographicInfo.articleNumber48eng
source.bibliographicInfo.volume22eng
source.identifier.eissn1533-7928eng
source.identifier.issn1532-4435eng
source.periodicalTitleJournal of Machine Learning Research (JMLR)eng
source.publisherMicrotome Publishingeng

Files

Original bundle

Name:
Azmi_2-ziodqjb4nyrb7.pdf
Size:
2.19 MB
Format:
Adobe Portable Document Format
Description:
Azmi_2-ziodqjb4nyrb7.pdf
Downloads: 252

License bundle

Name:
license.txt
Size:
3.96 KB
Format:
Item-specific license agreed upon at submission
Description:
license.txt
Downloads: 0