Wang, Leichen

Last name: Wang
First name: Leichen

Publication search results

Now showing 1 - 4 of 4
Publication

AI-Based sensor fusion for perception in autonomous vehicles

2023, Wang, Leichen

Publication

High Dimensional Frustum PointNet for 3D Object Detection from Camera, LiDAR, and Radar

2020, Wang, Leichen, Chen, Tianbai, Anklam, Carsten, Goldlücke, Bastian

Fusing the raw data from different automotive sensors for real-world environment perception is still challenging due to their different representations and data formats. In this work, we propose a novel method termed High Dimensional Frustum PointNet for 3D object detection in the context of autonomous driving. Motivated by the goals of data diversity and lossless processing of the data, our deep learning approach directly and jointly uses the raw data from the camera, LiDAR, and radar. In more detail, given 2D region proposals and classification from camera images, a high dimensional convolution operator captures local features from a point cloud enhanced with color and temporal information. Radars are used as adaptive plug-in sensors to refine object detection performance. As shown by an extensive evaluation on the nuScenes 3D detection benchmark, our network outperforms most of the previous methods.
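
The fusion step described in this abstract hinges on lifting 2D camera proposals into 3D frustums and enriching the LiDAR points that fall inside them with color features. Below is a minimal sketch of that frustum-cropping idea in NumPy; the function name, the pinhole intrinsics, and the toy data are illustrative assumptions, not the authors' code, and the temporal features and radar branch are omitted.

```python
import numpy as np

def frustum_points(points, colors, box2d, K):
    """Select LiDAR points whose image projection falls inside a 2D box
    and append per-point color features (hypothetical helper, not the
    paper's implementation).

    points: (N, 3) LiDAR points in camera coordinates
    colors: (N, 3) RGB values sampled at each point's projection
    box2d:  (x1, y1, x2, y2) region proposal in pixels
    K:      (3, 3) camera intrinsic matrix
    """
    # Project points to the image plane (pinhole model).
    uvw = points @ K.T                      # (N, 3)
    uv = uvw[:, :2] / uvw[:, 2:3]           # perspective divide
    x1, y1, x2, y2 = box2d
    in_front = points[:, 2] > 0             # keep points in front of the camera
    in_box = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) \
           & (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    mask = in_front & in_box
    # "High dimensional" point features: xyz + RGB.
    return np.concatenate([points[mask], colors[mask]], axis=1)

# Toy usage with random data.
pts = np.random.randn(1000, 3) + np.array([0.0, 0.0, 10.0])
rgb = np.random.rand(1000, 3)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
feats = frustum_points(pts, rgb, (200, 150, 440, 330), K)
print(feats.shape)  # (M, 6): xyz + RGB per point in the frustum
```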

Publication

Sparse-PointNet: See Further in Autonomous Vehicles

2021, Wang, Leichen, Goldlücke, Bastian

Since the density of LiDAR points reduces significantly with increasing distance, popular 3D detectors tend to learn spatial features from dense points and ignore very sparse points in the far range. As a result, their performance degrades dramatically beyond 50 meters. Motivated by this problem, we introduce a novel approach to jointly detect objects from multimodal sensor data, with two main contributions. First, we leverage PointPainting [15] to develop a new key point sampling algorithm, which encodes the complex scene into a few representative points with approximately uniform point density. Further, we fuse a dynamic continuous occupancy heatmap to refine the final proposal. In addition, we feed radar points into the network, which allows it to take additional cues into account. We evaluate our method on the widely used nuScenes dataset. Our method outperforms all state-of-the-art methods in the far range by a large margin and also achieves comparable performance in the near range.
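
The first contribution above encodes the scene into a few representative points of roughly uniform density, so that sparse far-range regions are not drowned out by dense near-range returns. The paper builds its sampler on PointPainting [15]; purely as a generic illustration of the representative-point idea, here is a plain farthest point sampling sketch in NumPy. The function name and toy data are assumptions, and this is not the paper's algorithm.

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedy farthest point sampling: pick k points that spread evenly
    over the cloud, which yields approximately uniform density across
    near and far ranges. Generic FPS sketch, not the paper's
    PointPainting-based sampler."""
    n = points.shape[0]
    chosen = np.zeros(k, dtype=int)
    dist = np.full(n, np.inf)
    chosen[0] = 0  # arbitrary seed point
    for i in range(1, k):
        # Update each point's distance to its nearest chosen point,
        # then pick the point farthest from all chosen points so far.
        d = np.sum((points - points[chosen[i - 1]]) ** 2, axis=1)
        dist = np.minimum(dist, d)
        chosen[i] = int(np.argmax(dist))
    return points[chosen]

cloud = np.random.randn(5000, 3) * [40.0, 40.0, 2.0]  # toy LiDAR-like cloud
keypoints = farthest_point_sampling(cloud, 256)
print(keypoints.shape)  # (256, 3)
```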

Publication

Radar Ghost Target Detection via Multimodal Transformers

2021, Wang, Leichen, Giebenhain, Simon, Anklam, Carsten, Goldlücke, Bastian

Ghost targets caused by inter-reflections are by design unavoidable in radar measurements, and it is challenging to distinguish these artifact detections from real ones. In this letter, we propose a novel approach to detect radar ghost targets by using LiDAR data as a reference. For this, we adopt a multimodal transformer network to learn interactions between points. We employ self-attention to exchange information between radar points, and local crossmodal attention to infuse information from surrounding LiDAR points. The key idea is that a ghost target should have higher semantic affinity with the reflected real target than with any other target. Extensive experiments on nuScenes [1] show that our method outperforms the baseline method on radar ghost target detection by a large margin.
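
The attention pattern described here, self-attention among radar points followed by crossmodal attention that pulls in information from surrounding LiDAR points, can be sketched in a few lines of PyTorch. Everything below (layer sizes, the two-class head, the module name, the toy feature dimensions) is an illustrative assumption rather than the paper's network, and the local neighborhood restriction on the cross-attention is omitted.

```python
import torch
import torch.nn as nn

class RadarLidarAttention(nn.Module):
    """Minimal sketch of the attention pattern in the abstract:
    self-attention among radar points, then cross-attention where radar
    points query LiDAR points. Hypothetical module, not the paper's."""

    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 2)  # ghost vs. real target logits

    def forward(self, radar_feats, lidar_feats):
        # radar_feats: (B, Nr, dim), lidar_feats: (B, Nl, dim)
        # Exchange information between radar points.
        x, _ = self.self_attn(radar_feats, radar_feats, radar_feats)
        # Infuse information from LiDAR points into each radar point.
        x, _ = self.cross_attn(x, lidar_feats, lidar_feats)
        return self.head(x)  # (B, Nr, 2) per-radar-point classification

model = RadarLidarAttention()
radar = torch.randn(1, 32, 64)   # toy radar point features
lidar = torch.randn(1, 512, 64)  # toy LiDAR point features
print(model(radar, lidar).shape)  # torch.Size([1, 32, 2])
```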