Rendered output:
first_sample_token = my_scene['first_sample_token']
my_sample = nusc.get('sample', first_sample_token)
my_sample['data']
Output:
{'CAM_BACK': '03bea5763f0f4722933508d5999c5fd8',
'CAM_BACK_LEFT': '43893a033f9c46d4a51b5e08a67a1eb7',
'CAM_BACK_RIGHT': '79dbb4460a6b40f49f9c150cb118247e',
'CAM_FRONT': 'e3d495d4ac534d54b321f50006683844',
'CAM_FRONT_LEFT': 'fe5422747a7d4268a4b07fc396707b23',
'CAM_FRONT_RIGHT': 'aac7867ebf4f446395d29fbd60b63b3b',
'LIDAR_TOP': '9d9bf11fb0e144c8b446d54a8a00184f',
'RADAR_BACK_LEFT': '312aa38d0e3e4f01b3124c523e6f9776',
'RADAR_BACK_RIGHT': '07b30d5eb6104e79be58eadf94382bc1',
'RADAR_FRONT': '37091c75b9704e0daa829ba56dfa0906',
'RADAR_FRONT_LEFT': '11946c1461d14016a322916157da3c7d',
'RADAR_FRONT_RIGHT': '491209956ee3435a9ec173dad3aaf58b'}
cam_front_data = nusc.get('sample_data', my_sample['data']['CAM_FRONT'])
radar_front_data = nusc.get('sample_data', my_sample['data']['RADAR_FRONT'])
nusc.render_sample_data(my_sample['data']['RADAR_FRONT'])
Rendered output:
- Conversion principle: the NuScenes devkit method `map_pointcloud_to_image`
A sample is the collection of data captured at one point in time. It contains sample_annotations (the object annotations) and sample_data (the sensor readings at that moment). However, the radar/lidar and the camera sample at different rates, so their captures can be a few milliseconds or less apart. The points therefore have to be transformed not only in space but also in time.
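Every step below is the same rigid-transform primitive used by the devkit's point-cloud `rotate`/`translate` calls: moving from a child frame into its parent applies the rotation and then the translation, and moving the other way applies the inverses in the reverse order. A minimal numpy sketch (quaternions as `(w, x, y, z)`, the convention of nuScenes calibration records; the helper names are illustrative, not devkit API):

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def to_parent(points, rotation, translation):
    """Child frame -> parent frame: rotate, then translate.
    points: (3, N); rotation: quaternion (w, x, y, z); translation: (3,)."""
    return quat_to_rot(rotation) @ points + np.asarray(translation)[:, None]

def to_child(points, rotation, translation):
    """Parent frame -> child frame: un-translate, then un-rotate
    (the order used in the ego-at-camera-time and camera steps below)."""
    return quat_to_rot(rotation).T @ (points - np.asarray(translation)[:, None])
```

Applying `to_child` after `to_parent` with the same pose is a round trip, which is why the second half of the chain uses the transposed rotation and the negated translation.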
- Transform from the point sensor's frame to the ego-vehicle frame at the sensor's timestamp (the points were captured at the point sensor's timestamp)
cs_record = self.nusc.get('calibrated_sensor', pointsensor['calibrated_sensor_token'])
pc.rotate(Quaternion(cs_record['rotation']).rotation_matrix)
pc.translate(np.array(cs_record['translation']))
- Transform to the global frame
poserecord = self.nusc.get('ego_pose', pointsensor['ego_pose_token'])
pc.rotate(Quaternion(poserecord['rotation']).rotation_matrix)
pc.translate(np.array(poserecord['translation']))
- Transform to the ego-vehicle frame at the camera's timestamp
poserecord = self.nusc.get('ego_pose', cam['ego_pose_token'])
pc.translate(-np.array(poserecord['translation']))
pc.rotate(Quaternion(poserecord['rotation']).rotation_matrix.T)
- Transform into the camera frame
cs_record = self.nusc.get('calibrated_sensor', cam['calibrated_sensor_token'])
pc.translate(-np.array(cs_record['translation']))
pc.rotate(Quaternion(cs_record['rotation']).rotation_matrix.T)
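The four steps above, followed by the pinhole projection with the camera intrinsics that finishes `map_pointcloud_to_image`, can be sketched end to end in plain numpy. The function name, the record layout, and the intrinsic matrix in the test are illustrative placeholders, not values or APIs from the dataset:

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def sensor_to_image(pts, cs_sensor, pose_sensor, pose_cam, cs_cam, K):
    """pts: (3, N) in the point sensor's frame. Each pose/calibration record
    is a dict {'rotation': (w, x, y, z), 'translation': (3,)}, mirroring the
    nuScenes calibrated_sensor and ego_pose records. K: (3, 3) intrinsics.
    Returns (2, N) pixel coordinates and the per-point depth."""
    # 1. sensor frame -> ego frame at the point sensor's timestamp
    pts = quat_to_rot(cs_sensor['rotation']) @ pts \
        + np.asarray(cs_sensor['translation'])[:, None]
    # 2. ego frame -> global frame
    pts = quat_to_rot(pose_sensor['rotation']) @ pts \
        + np.asarray(pose_sensor['translation'])[:, None]
    # 3. global frame -> ego frame at the camera's timestamp (inverse order)
    pts = quat_to_rot(pose_cam['rotation']).T \
        @ (pts - np.asarray(pose_cam['translation'])[:, None])
    # 4. ego frame -> camera frame
    pts = quat_to_rot(cs_cam['rotation']).T \
        @ (pts - np.asarray(cs_cam['translation'])[:, None])
    # 5. pinhole projection: apply intrinsics, then divide by depth
    depth = pts[2]
    uv = (K @ pts)[:2] / depth
    return uv, depth
```

In the real devkit the records come from `nusc.get('calibrated_sensor', ...)` and `nusc.get('ego_pose', ...)` exactly as in the snippets above, and points behind the camera (non-positive depth) are filtered out before drawing.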