Publication:

High Dimensional Frustum PointNet for 3D Object Detection from Camera, LiDAR, and Radar

Files

No files are associated with this document.

Date

2020

Authors

Wang, Leichen
Chen, Tianbai
Anklam, Carsten
Goldlücke, Bastian

Open access publication
Core Facility of the University of Konstanz

Publication type
Contribution to a conference proceedings
Publication status
Published

Published in

2020 IEEE Intelligent Vehicles Symposium (IV). Piscataway, NJ: IEEE, 2020, pp. 1615-1622. ISSN 1931-0587. eISSN 2642-7214. ISBN 978-1-72816-673-5. Available under: doi: 10.1109/IV47402.2020.9304655

Abstract

Fusing the raw data from different automotive sensors for real-world environment perception is still challenging due to their different representations and data formats. In this work, we propose a novel method termed High Dimensional Frustum PointNet for 3D object detection in the context of autonomous driving. Motivated by the goals of data diversity and lossless processing of the data, our deep learning approach directly and jointly uses the raw data from the camera, LiDAR, and radar. In more detail, given 2D region proposals and classification from camera images, a high dimensional convolution operator captures local features from a point cloud enhanced with color and temporal information. Radars are used as adaptive plug-in sensors to refine object detection performance. As shown by an extensive evaluation on the nuScenes 3D detection benchmark, our network outperforms most of the previous methods.
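The core frustum idea the abstract describes — lifting a 2D camera region proposal into the set of LiDAR points behind it — can be sketched as follows. This is an illustrative minimal sketch only, not the paper's implementation: `frustum_points`, the toy intrinsic matrix, and the box coordinates are assumptions, and the paper's actual pipeline further enriches the points with color and temporal channels and refines detections with radar.

```python
import numpy as np

def frustum_points(points, box2d, K):
    """Select LiDAR points whose camera projection falls inside a
    2D detection box -- the 'frustum' handed to the 3D stage.

    points : (N, 3) points already in the camera frame
             (x right, y down, z forward).
    box2d  : (xmin, ymin, xmax, ymax) region proposal in pixels.
    K      : (3, 3) camera intrinsic matrix.
    """
    # Keep only points in front of the camera (positive depth).
    pts = points[points[:, 2] > 0]

    # Pinhole projection: homogeneous pixel coords, then divide by depth.
    uvw = pts @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]

    xmin, ymin, xmax, ymax = box2d
    inside = (
        (uv[:, 0] >= xmin) & (uv[:, 0] <= xmax)
        & (uv[:, 1] >= ymin) & (uv[:, 1] <= ymax)
    )
    return pts[inside]

# Toy example: one point projects into the proposal box, one falls
# outside it, and one lies behind the camera.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
points = np.array([
    [0.0, 0.0, 10.0],   # projects to the image center (320, 240)
    [5.0, 0.0, 10.0],   # projects far to the right (570, 240)
    [0.0, 0.0, -5.0],   # behind the camera, discarded
])
in_frustum = frustum_points(points, (300, 220, 340, 260), K)
print(len(in_frustum))  # -> 1
```

In the paper's setting, each surviving point would additionally carry color (from the camera) and temporal channels before being processed by the high dimensional convolution operator.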

Subject area (DDC)
004 Computer Science

Conference

2020 IEEE Intelligent Vehicles Symposium (IV), Oct. 19 - Nov. 13, 2020, Las Vegas, NV, USA

Cite

ISO 690
WANG, Leichen, Tianbai CHEN, Carsten ANKLAM, Bastian GOLDLÜCKE, 2020. High Dimensional Frustum PointNet for 3D Object Detection from Camera, LiDAR, and Radar. 2020 IEEE Intelligent Vehicles Symposium (IV). Las Vegas, NV, USA, 19. Okt. 2020 - 13. Nov. 2020. In: 2020 IEEE Intelligent Vehicles Symposium (IV). Piscataway, NJ: IEEE, 2020, pp. 1615-1622. ISSN 1931-0587. eISSN 2642-7214. ISBN 978-1-72816-673-5. Available under: doi: 10.1109/IV47402.2020.9304655
BibTex
@inproceedings{Wang2020Dimen-54152,
  year={2020},
  doi={10.1109/IV47402.2020.9304655},
  title={High Dimensional Frustum PointNet for 3D Object Detection from Camera, LiDAR, and Radar},
  isbn={978-1-72816-673-5},
  issn={1931-0587},
  publisher={IEEE},
  address={Piscataway, NJ},
  booktitle={2020 IEEE Intelligent Vehicles Symposium (IV)},
  pages={1615--1622},
  author={Wang, Leichen and Chen, Tianbai and Anklam, Carsten and Goldlücke, Bastian}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/54152">
    <dc:creator>Wang, Leichen</dc:creator>
    <dc:contributor>Goldlücke, Bastian</dc:contributor>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/54152"/>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dc:creator>Chen, Tianbai</dc:creator>
    <dcterms:abstract xml:lang="eng">Fusing the raw data from different automotive sensors for real-world environment perception is still challenging due to their different representations and data formats. In this work, we propose a novel method termed High Dimensional Frustum PointNet for 3D object detection in the context of autonomous driving. Motivated by the goals data diversity and lossless processing of the data, our deep learning approach directly and jointly uses the raw data from the camera, LiDAR, and radar. In more detail, given 2D region proposals and classification from camera images, a high dimensional convolution operator captures local features from a point cloud enhanced with color and temporal information. Radars are used as adaptive plug-in sensors to refine object detection performance. As shown by an extensive evaluation on the nuScenes 3D detection benchmark, our network outperforms most of the previous methods.</dcterms:abstract>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dcterms:issued>2020</dcterms:issued>
    <dc:contributor>Wang, Leichen</dc:contributor>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-06-30T11:30:52Z</dcterms:available>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-06-30T11:30:52Z</dc:date>
    <dcterms:title>High Dimensional Frustum PointNet for 3D Object Detection from Camera, LiDAR, and Radar</dcterms:title>
    <dc:creator>Goldlücke, Bastian</dc:creator>
    <dc:contributor>Chen, Tianbai</dc:contributor>
    <dc:creator>Anklam, Carsten</dc:creator>
    <dc:contributor>Anklam, Carsten</dc:contributor>
    <dc:language>eng</dc:language>
  </rdf:Description>
</rdf:RDF>

Alliance license
Corresponding authors from the University of Konstanz present
International co-authors
University bibliography
Yes
Peer-reviewed