[1]Qi, C. R., Liu, W., Wu, C., Su, H., & Guibas, L. J. (2018). Frustum PointNets for 3D Object Detection from RGB-D Data. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 918–927.
[2]John, V., & Mita, S. (2019). RVNet: Deep Sensor Fusion of Monocular Camera and Radar for Image-Based Obstacle Detection in Challenging Environments. Lecture Notes in Computer Science, 11854, 351–364. https://doi.org/10.1007/978-3-030-34879-3_27
[3]Nobis, F., Geisslinger, M., Weber, M., Betz, J., & Lienkamp, M. (2019). A Deep Learning-based Radar and Camera Sensor Fusion Architecture for Object Detection. 2019 Symposium on Sensor Data Fusion: Trends, Solutions, Applications, SDF 2019, 1–7. https://doi.org/10.1109/SDF.2019.8916629
[4]Nabati, R., & Qi, H. (2021). CenterFusion: Center-Based Radar and Camera Fusion for 3D Object Detection. Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), 1527–1536.
[5]Chang, S., Zhang, Y., Zhang, F., Zhao, X., Huang, S., Feng, Z., & Wei, Z. (2020). Spatial Attention Fusion for Obstacle Detection Using mmWave Radar and Vision Sensor. Sensors, 20(4), 956. https://doi.org/10.3390/s20040956
[6]Kowol, K., Rottmann, M., Bracke, S., & Gottschalk, H. (2021). YOdar: Uncertainty-Based Sensor Fusion for Vehicle Detection with Camera and Radar Sensors. Proceedings of the 13th International Conference on Agents and Artificial Intelligence (ICAART 2021), 2, 177–186. https://doi.org/10.5220/0010239301770186