Publication:

Sparse-PointNet : See Further in Autonomous Vehicles

Files

There are no files associated with this document.

Date

2021

Publication type
Journal article
Publication status
Published

Published in

IEEE Robotics and Automation Letters. IEEE. 2021, 6(4), pp. 7049-7056. eISSN 2377-3766. Available under: doi: 10.1109/LRA.2021.3096253

Abstract

Since the density of LiDAR points reduces significantly with increasing distance, popular 3D detectors tend to learn spatial features from dense points and ignore very sparse points in the far range. As a result, their performance degrades dramatically beyond 50 meters. Motivated by the above problem, we introduce a novel approach to jointly detect objects from multimodal sensor data, with two main contributions. First, we leverage PointPainting [15] to develop a new key point sampling algorithm, which encodes the complex scene into a few representative points with approximately similar point density. Further, we fuse a dynamic continuous occupancy heatmap to refine the final proposal. In addition, we feed radar points into the network, which allows it to take into account additional cues. We evaluate our method on the widely used nuScenes dataset. Our method outperforms all state-of-the-art methods in the far range by a large margin and also achieves comparable performance in the near range.
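The key point sampling idea described in the abstract can be illustrated with a small sketch. The following Python snippet is hypothetical and is not the authors' method (their algorithm builds on PointPainting, paints points with semantic information, and fuses a dynamic continuous occupancy heatmap and radar points); it only shows one simple way to sample key points per range bin so that the resulting set has approximately similar point density in the near and far range. All function names, bin edges, and parameters are assumptions made for this illustration.

# Hypothetical sketch, not the paper's implementation: range-binned
# farthest point sampling that keeps roughly the same number of key
# points per distance bin, so far-range (sparse) regions are not
# drowned out by dense near-range returns.
import numpy as np


def farthest_point_sampling(points: np.ndarray, k: int) -> np.ndarray:
    """Greedy farthest point sampling; returns indices of k chosen points."""
    n = points.shape[0]
    k = min(k, n)
    chosen = np.zeros(k, dtype=np.int64)  # start from point 0
    dist = np.full(n, np.inf)
    for i in range(1, k):
        # Distance of every point to the most recently chosen key point.
        diff = points - points[chosen[i - 1]]
        dist = np.minimum(dist, np.einsum("ij,ij->i", diff, diff))
        chosen[i] = int(np.argmax(dist))
    return chosen


def range_balanced_keypoints(points: np.ndarray,
                             n_keypoints: int = 2048,
                             range_bins=(0.0, 20.0, 35.0, 50.0, 80.0)) -> np.ndarray:
    """Sample about the same number of key points from each range bin."""
    radii = np.linalg.norm(points[:, :2], axis=1)  # distance in the ground plane
    per_bin = max(1, n_keypoints // (len(range_bins) - 1))
    keep = []
    for lo, hi in zip(range_bins[:-1], range_bins[1:]):
        idx = np.where((radii >= lo) & (radii < hi))[0]
        if idx.size == 0:
            continue
        sel = farthest_point_sampling(points[idx, :3], per_bin)
        keep.append(idx[sel])
    return np.concatenate(keep)


if __name__ == "__main__":
    # Toy cloud: many points near the sensor, few beyond 50 m.
    rng = np.random.default_rng(0)
    near = rng.normal(scale=10.0, size=(20000, 3))
    far = np.column_stack([rng.uniform(50.0, 80.0, 500),
                           rng.normal(scale=5.0, size=500),
                           rng.normal(scale=1.0, size=500)])
    keypoint_idx = range_balanced_keypoints(np.vstack([near, far]), n_keypoints=1024)
    print(keypoint_idx.shape)

In this toy setup the far range contributes as many key points as the near range, which is the density-balancing effect the abstract attributes to its sampling step; the actual method additionally uses painted semantic scores and the occupancy heatmap when selecting and refining proposals.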

Subject area (DDC)
004 Computer science

Cite

ISO 690
WANG, Leichen, Bastian GOLDLÜCKE, 2021. Sparse-PointNet : See Further in Autonomous Vehicles. In: IEEE Robotics and Automation Letters. IEEE. 2021, 6(4), pp. 7049-7056. eISSN 2377-3766. Available under: doi: 10.1109/LRA.2021.3096253
BibTeX
@article{Wang2021Spars-54548,
  year={2021},
  doi={10.1109/LRA.2021.3096253},
  title={Sparse-PointNet : See Further in Autonomous Vehicles},
  number={4},
  volume={6},
  journal={IEEE Robotics and Automation Letters},
  pages={7049--7056},
  author={Wang, Leichen and Goldlücke, Bastian}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/54548">
    <dc:contributor>Wang, Leichen</dc:contributor>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-08-11T09:09:55Z</dcterms:available>
    <dc:creator>Goldlücke, Bastian</dc:creator>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/54548"/>
    <dc:creator>Wang, Leichen</dc:creator>
    <dcterms:title>Sparse-PointNet : See Further in Autonomous Vehicles</dcterms:title>
    <dc:contributor>Goldlücke, Bastian</dc:contributor>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:issued>2021</dcterms:issued>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-08-11T09:09:55Z</dc:date>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:language>eng</dc:language>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dcterms:abstract xml:lang="eng">Since the density of LiDAR points reduces significantly with increasing distance, popular 3D detectors tend to learn spatial features from dense points and ignore very sparse points in the far range. As a result, their performance degrades dramatically beyond 50 meters. Motivated by the above problem, we introduce a novel approach to jointly detect objects from multimodal sensor data, with two main contributions. First, we leverage PointPainting [15] to develop a new key point sampling algorithm, which encodes the complex scene into a few representative points with approximately similar point density. Further, we fuse a dynamic continuous occupancy heatmap to refine the final proposal. In addition, we feed radar points into the network, which allows it to take into account additional cues. We evaluate our method on the widely used nuScenes dataset. Our method outperforms all state-of-the-art methods in the far range by a large margin and also achieves comparable performance in the near range.</dcterms:abstract>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
  </rdf:Description>
</rdf:RDF>

University bibliography
Yes
Peer-reviewed
Yes