Ego state
The ego state component fuses GNSS, IMU, vehicle odometry, and lidar data to provide a highly accurate estimate of the ego vehicle's position, orientation, and motion. This estimate is the basis for all other perception features, such as dynamic object tracking and road boundary extraction.
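As an illustration, an ego state estimate of this kind could be represented as follows. The field names, units, and frame conventions here are hypothetical, not the component's actual interface:

```python
from dataclasses import dataclass

@dataclass
class EgoState:
    """Hypothetical ego state record (field names are illustrative only)."""
    timestamp: float   # time of the estimate [s]
    x: float           # georeferenced position, e.g. UTM easting [m]
    y: float           # UTM northing [m]
    z: float           # altitude [m]
    yaw: float         # heading [rad]
    pitch: float       # [rad]
    roll: float        # [rad]
    vx: float          # longitudinal velocity [m/s]
    yaw_rate: float    # [rad/s]

state = EgoState(timestamp=0.0, x=500000.0, y=5400000.0, z=120.5,
                 yaw=1.57, pitch=0.0, roll=0.0, vx=8.3, yaw_rate=0.02)
print(state.vx)  # prints 8.3
```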
Road features
The road component extracts lane markings and road boundaries from lidar data. The detected semantic lanes are provided as a highly accurate, georeferenced map conforming to the OpenDRIVE standard.
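Since OpenDRIVE is an XML-based format, such a map can be inspected with standard XML tooling. The fragment below is a minimal, hand-written example for illustration; real maps produced by the road component will contain far more detail:

```python
import xml.etree.ElementTree as ET

# Minimal OpenDRIVE fragment (illustrative only, not real component output).
opendrive_xml = """<?xml version="1.0"?>
<OpenDRIVE>
  <road name="example" length="100.0" id="1">
    <lanes>
      <laneSection s="0.0">
        <right>
          <lane id="-1" type="driving" level="false"/>
        </right>
      </laneSection>
    </lanes>
  </road>
</OpenDRIVE>"""

root = ET.fromstring(opendrive_xml)
for road in root.iter("road"):
    print("road", road.get("id"), "length", road.get("length"))
    for lane in road.iter("lane"):
        print("  lane", lane.get("id"), lane.get("type"))
```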
Drivable free space
The free space component combines the fields of view of any number of lidar sensors at arbitrary height levels. It removes dynamic objects and road boundary information to yield an accurate estimate of potentially drivable areas. The detected free space is encoded in WKT (well-known text) format, making it directly usable in common GIS tools.
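To give an idea of what working with this output looks like, the sketch below parses a simple WKT polygon with the Python standard library and computes its enclosed area via the shoelace formula. The coordinates are invented for illustration; real geometries come from the free space component:

```python
import re

# Hypothetical free-space output as a WKT polygon (coordinates invented).
wkt = "POLYGON ((0 0, 40 0, 40 12, 0 12, 0 0))"

def parse_wkt_polygon(text: str) -> list[tuple[float, float]]:
    """Parse the outer ring of a simple WKT POLYGON into (x, y) pairs."""
    body = re.search(r"POLYGON\s*\(\((.*?)\)\)", text).group(1)
    return [tuple(map(float, pair.split())) for pair in body.split(",")]

ring = parse_wkt_polygon(wkt)

# Shoelace formula for the enclosed (drivable) area in square meters.
area = 0.5 * abs(sum(x1 * y2 - x2 * y1
                     for (x1, y1), (x2, y2) in zip(ring, ring[1:])))
print(area)  # prints 480.0
```

For anything beyond this toy example, a dedicated geometry library such as Shapely handles the full WKT grammar (holes, multipolygons).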
Dynamic objects
The objects component detects traffic participants such as cars and pedestrians in the lidar data and tracks their position, dynamics, and classification, each with a confidence estimate. All object information is provided in JSON format for easy integration into existing toolchains.
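A JSON object list of this kind might look like the following; the schema (field names, nesting) is a hypothetical example, not the component's actual output format:

```python
import json

# Hypothetical tracked-object payload (schema invented for illustration).
payload = """
{
  "objects": [
    {"id": 17, "class": "car", "confidence": 0.94,
     "position": {"x": 12.4, "y": -3.1, "z": 0.0},
     "velocity": {"x": 6.2, "y": 0.1}},
    {"id": 18, "class": "pedestrian", "confidence": 0.88,
     "position": {"x": 4.0, "y": 2.5, "z": 0.0},
     "velocity": {"x": 0.4, "y": 1.1}}
  ]
}
"""

data = json.loads(payload)
for obj in data["objects"]:
    print(obj["id"], obj["class"], obj["confidence"])
```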
Shape of ground
The ground detection module estimates the shape of the ground surface and provides a confidence value for every location.
Traffic light and sign recognition
The new camera perception component detects and classifies traffic signs and traffic lights in camera images. In a subsequent fusion step, these camera-based detections are combined with the lidar point cloud to obtain accurate 3D object positions. The detected traffic signs and lights are then added to the georeferenced OpenDRIVE map.
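One common way such a fusion step can work is sketched below: lidar points are projected into the image through a pinhole camera model, points falling inside the camera detection's bounding box are collected, and their median depth gives the object's range. All numbers (intrinsics, box, points) are invented, the camera is assumed ideal and co-located with the lidar, and this is only one possible approach, not necessarily the component's actual method:

```python
import statistics

# Assumed pinhole intrinsics (focal lengths and principal point, pixels).
fx, fy, cx, cy = 800.0, 800.0, 640.0, 360.0
# Camera detection of a traffic sign: (u_min, v_min, u_max, v_max).
bbox = (600.0, 300.0, 700.0, 420.0)

# Lidar points in the camera frame (x right, y down, z forward), invented.
points = [(0.1, -0.2, 14.5), (0.2, -0.1, 15.5), (5.0, 1.0, 30.0)]

depths = []
for x, y, z in points:
    if z <= 0:          # behind the camera, cannot project
        continue
    u = fx * x / z + cx  # pinhole projection to pixel coordinates
    v = fy * y / z + cy
    if bbox[0] <= u <= bbox[2] and bbox[1] <= v <= bbox[3]:
        depths.append(z)

# Median lidar depth inside the box estimates the sign's 3D range.
print(statistics.median(depths))  # prints 15.0
```

The third point projects outside the box and is discarded, so only the two points actually belonging to the sign contribute to the range estimate.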