
Bounding Boxes (2D & 3D)


Bounding box annotations are one of the most common techniques used in image annotation. They involve drawing rectangular boxes around objects of interest. In 2D annotations, the bounding boxes provide information about the object’s position and size within a single image. In 3D annotations, bounding boxes can represent the object’s position, size, and orientation in three-dimensional space.
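The difference between the two can be sketched with minimal data structures: a 2D box needs only an image-plane corner plus width and height, while a 3D box adds a spatial position, physical dimensions, and a heading angle. The field names and units here are illustrative assumptions, not a specific tool's schema.

```python
from dataclasses import dataclass

@dataclass
class Box2D:
    # Axis-aligned box in pixel coordinates: top-left corner plus size.
    x: float
    y: float
    w: float
    h: float

    def area(self) -> float:
        return self.w * self.h

@dataclass
class Box3D:
    # Center position (metres), physical dimensions, and yaw (radians)
    # capture position, size, AND orientation in 3D space.
    cx: float
    cy: float
    cz: float
    length: float
    width: float
    height: float
    yaw: float

    def volume(self) -> float:
        return self.length * self.width * self.height

car_2d = Box2D(x=120.0, y=80.0, w=64.0, h=48.0)
car_3d = Box3D(cx=5.0, cy=1.2, cz=0.9, length=4.5, width=1.8, height=1.5, yaw=0.1)
print(car_2d.area())    # 3072.0
print(car_3d.volume())  # 12.15
```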

Polygons


Polygon annotations outline objects with irregular shapes. Instead of rectangular bounding boxes, polygons trace the exact contours of objects. This technique is commonly used for objects such as vehicles, buildings, or natural landscapes.
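In practice a polygon annotation is just an ordered list of vertices tracing the object's contour, from which properties like the enclosed area follow directly (here via the shoelace formula). The example shape is hypothetical.

```python
# A polygon annotation: an ordered loop of (x, y) vertices along the contour.
Polygon = list[tuple[float, float]]

def polygon_area(poly: Polygon) -> float:
    """Area enclosed by the vertex loop, via the shoelace formula."""
    total = 0.0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]  # wrap around to close the loop
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# An L-shaped building footprint that a bounding box would capture poorly.
building = [(0, 0), (10, 0), (10, 6), (4, 6), (4, 10), (0, 10)]
print(polygon_area(building))  # 76.0
```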

Polylines


Polyline annotations mark linear objects, such as roads, rivers, or boundaries. Unlike polygons, polylines do not enclose an area; they define the shape and direction of a line.
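A polyline is the same vertex list as a polygon but left open, so the natural derived quantity is length rather than area. A minimal sketch, with made-up coordinates:

```python
import math

# A polyline annotation: an open sequence of (x, y) vertices.
Polyline = list[tuple[float, float]]

def polyline_length(line: Polyline) -> float:
    # Sum of segment lengths; unlike a polygon, the last vertex
    # is NOT connected back to the first.
    return sum(math.dist(line[i], line[i + 1]) for i in range(len(line) - 1))

lane_marking = [(0, 0), (3, 4), (3, 10)]  # a 3-4-5 segment, then a vertical one
print(polyline_length(lane_marking))  # 11.0
```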

Semantic segmentation


Semantic segmentation annotations assign a class label to each pixel within an image. This technique enables pixel-level understanding and accurate delineation of object boundaries. It is widely used in applications like autonomous driving, medical imaging, and scene understanding.
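"A class label per pixel" concretely means a mask: a 2D grid the same size as the image, holding a class id at every position. The tiny mask and class ids below are hypothetical.

```python
# A 4x4 segmentation mask; class ids are illustrative:
# 0 = background, 1 = road, 2 = vehicle.
mask = [
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [1, 2, 2, 1],
    [1, 1, 1, 1],
]

def class_pixel_counts(mask: list[list[int]]) -> dict[int, int]:
    """How many pixels each class occupies, e.g. for coverage statistics."""
    counts: dict[int, int] = {}
    for row in mask:
        for label in row:
            counts[label] = counts.get(label, 0) + 1
    return counts

print(class_pixel_counts(mask))  # {0: 3, 1: 9, 2: 4}
```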

Keypoint annotations


Keypoint annotations involve identifying and labeling specific points of interest within an image. These points represent critical landmarks or features, such as joints in human pose estimation or facial keypoints for emotion recognition.
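One common way to store keypoints (used, for example, by the COCO format) is a coordinate pair plus a visibility flag per landmark, since joints are often occluded. The joint names and coordinates below are illustrative.

```python
# Keypoints as named landmarks with a visibility flag per point:
# 0 = not labeled, 1 = labeled but occluded, 2 = labeled and visible.
pose = {
    "nose": (120.0, 45.0, 2),
    "left_shoulder": (100.0, 90.0, 2),
    "right_shoulder": (140.0, 92.0, 1),  # occluded by another person
    "left_elbow": (0.0, 0.0, 0),         # not labeled in this image
}

def visible_points(keypoints: dict[str, tuple[float, float, int]]) -> list[str]:
    """Names of landmarks that are labeled and visible."""
    return [name for name, (x, y, v) in keypoints.items() if v == 2]

print(visible_points(pose))  # ['nose', 'left_shoulder']
```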

LiDAR & RADAR


LiDAR and RADAR annotations label sensor data rather than camera images, most commonly in autonomous driving. LiDAR annotations label point clouds to detect objects and estimate their 3D position, while RADAR annotations mark radar data for object detection and tracking.
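Labeling a point cloud often amounts to assigning each 3D point to an annotated cuboid. A minimal sketch, simplified to an axis-aligned box (real cuboids also carry a yaw rotation); all coordinates are made up.

```python
Point3D = tuple[float, float, float]

def points_in_box(points: list[Point3D],
                  box_min: Point3D,
                  box_max: Point3D) -> list[Point3D]:
    """Return the LiDAR points falling inside an axis-aligned 3D box."""
    return [
        p for p in points
        if all(lo <= c <= hi for c, lo, hi in zip(p, box_min, box_max))
    ]

cloud = [(1.0, 0.5, 0.2), (4.0, 4.0, 1.0), (1.5, 0.0, 0.8)]
car_box = ((0.0, -1.0, 0.0), (2.0, 1.0, 1.5))  # (min corner, max corner)
print(points_in_box(cloud, *car_box))  # [(1.0, 0.5, 0.2), (1.5, 0.0, 0.8)]
```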

Multisensor


Multisensor annotations combine annotations from multiple sources, such as images, LiDAR, RADAR, or other sensors. Fusing data from different sensors yields a more comprehensive and accurate understanding of the environment than any single sensor provides.
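Combining sensors usually means relating one sensor's annotations to another's coordinate frame, for example projecting a LiDAR point into the camera image so a 3D label can be checked against a 2D one. A sketch using an idealized pinhole model; the focal length and principal point are invented values, not real calibration.

```python
Point3D = tuple[float, float, float]

def project_to_image(point: Point3D,
                     f: float = 1000.0,      # focal length in pixels (assumed)
                     cx: float = 640.0,      # principal point x (assumed)
                     cy: float = 360.0) -> tuple[float, float]:
    """Pinhole projection of a 3D point (camera frame, z forward) to pixels."""
    x, y, z = point
    # Perspective divide: pixel = f * (x/z, y/z) + principal point.
    return (f * x / z + cx, f * y / z + cy)

lidar_point = (2.0, -1.0, 10.0)  # metres, already transformed to camera frame
print(project_to_image(lidar_point))  # (840.0, 260.0)
```

In a real pipeline the LiDAR point would first be transformed into the camera frame using the extrinsic calibration between the two sensors; that step is omitted here.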