
Comparison of image annotation types

June 22, 2022

What is computer vision?

Computer vision is a broad field of AI and machine learning that enables computers to extract useful information from digital media such as images and videos, and to act or make recommendations based on that information.

What is image annotation?

Image annotation is the process of labeling an image, using text or annotation tools, to mark the data features a machine learning model should learn to recognize on its own.

Most common image annotation types

- Image classification

- Object recognition

- Image segmentation

- Boundary recognition

Types of image annotation and their use cases

Bounding Boxes 


Bounding boxes are rectangular frames drawn around an object of interest to assign it a class label. This annotation type makes it simple for algorithms to locate what they are looking for in an image and match the detected object with what they were trained on.

Bounding boxes are also used to teach autonomous vehicles to detect the different objects on the streets.

Use Case

Detecting objects for self-driving cars. Bounding-box data helps machines identify objects on and around the road, e.g. cars, pedestrians, street signs, and traffic lights.
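To make this concrete, here is a minimal sketch of how a bounding-box label might be stored and compared against a model's prediction using intersection-over-union (IoU), the standard overlap metric. The `(x_min, y_min, x_max, y_max)` format and the `"pedestrian"` label are illustrative assumptions, not any particular tool's schema.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero width/height if the boxes do not intersect).
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Example: a labeled pedestrian box vs. a model's predicted box.
label = {"class": "pedestrian", "box": (10, 10, 50, 90)}
prediction = (12, 8, 52, 88)
print(round(iou(label["box"], prediction), 3))
```

An IoU close to 1 means the prediction tightly matches the annotation; detection benchmarks often count a prediction as correct when IoU exceeds a threshold such as 0.5.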

Line annotation and splines


Lines and splines are used to recognize lanes and boundaries in an image. They are chosen when the region to annotate acts as a boundary but is too small or thin for bounding boxes or other annotation types.

Use Case

Can be used to train warehouse robots to accurately place items in a row. Can also be used to annotate sidewalks and lanes so autonomous vehicles recognize boundaries and stay in their lane.
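A line or spline annotation is typically stored as an ordered list of points. The sketch below, with a made-up `"lane_boundary"` label and coordinates, shows that representation plus a small helper computing the polyline's total length:

```python
import math

def polyline_length(points):
    """Total length of a polyline given as an ordered [(x, y), ...] list."""
    return sum(
        math.dist(points[i], points[i + 1])  # Euclidean segment length
        for i in range(len(points) - 1)
    )

# Illustrative lane annotation: the label name and points are assumptions.
lane_line = {"label": "lane_boundary", "points": [(0, 0), (3, 4), (6, 8)]}
print(polyline_length(lane_line["points"]))  # two 3-4-5 segments -> 10.0
```

A spline annotation would add interpolation between these control points, but the underlying label is still just the ordered point list.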

Landmark annotation


Landmark annotation involves placing dots or points throughout an image. Small dots mark individual items in images containing many little objects; connected points trace the outline or pose of an object, while larger dots mark prominent landmark sites in an area.

Use Case

Landmark annotation is used to improve accuracy in human pose estimation and sentiment analysis. It can also be used for facial recognition in security systems and to track character movements in video games.
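As an illustration, a facial-landmark annotation can be stored as named (x, y) points. The landmark names below follow common facial-keypoint conventions but are not tied to any specific dataset; the helper computes the inter-ocular distance, often used to normalize landmark error:

```python
# Hypothetical facial-landmark annotation: names and coordinates are
# illustrative, not from a real dataset.
face_landmarks = {
    "left_eye": (120, 95),
    "right_eye": (180, 95),
    "nose_tip": (150, 140),
    "mouth_left": (130, 175),
    "mouth_right": (170, 175),
}

def interocular_distance(landmarks):
    """Distance between the eyes, a common normalizer for landmark error."""
    (lx, ly), (rx, ry) = landmarks["left_eye"], landmarks["right_eye"]
    return ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5

print(interocular_distance(face_landmarks))  # -> 60.0
```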

3D cuboids


3D cuboids are used not only to recognize an object but also to estimate its shape and volume: unlike 2D boxes, they capture depth as well as height and width. Anchor points are placed at the edges of the item and connected with lines, creating a 3D representation of the object.

Use Case

3D cuboids are mostly used for autonomous systems capable of locomotion, e.g. mobile robots and self-driving vehicles, because they allow predictions about the position and extent of objects in the surrounding environment.
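A simple way to see how a cuboid label carries depth as well as height and width is to expand it from a center point and dimensions into its eight corner vertices. This sketch assumes an axis-aligned cuboid; real cuboid annotations usually also store a rotation, which is omitted here:

```python
from itertools import product

def cuboid_corners(center, size):
    """Eight (x, y, z) corners of an axis-aligned cuboid."""
    cx, cy, cz = center
    w, h, d = size  # width, height, depth
    return [
        (cx + sx * w / 2, cy + sy * h / 2, cz + sz * d / 2)
        for sx, sy, sz in product((-1, 1), repeat=3)  # all sign combinations
    ]

# Illustrative cuboid 2 wide, 4 high, 6 deep, centered at the origin.
corners = cuboid_corners(center=(0, 0, 0), size=(2, 4, 6))
print(len(corners))  # 8 corner points
```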

Polygons


Polygons are used when objects in an image don't fit well in a 3D cuboid or bounding box because of their size or shape, or when more precise annotation is needed. By following an object's outline, polygons cut out the unnecessary surrounding pixels that could otherwise confuse the model.

Use Case

Polygons are used to annotate irregularly shaped objects within an image, such as vegetation and houses. In autonomous driving, polygons can identify irregular objects such as street signs and locate cars more precisely.
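A polygon annotation is just an ordered list of vertices tracing the object's outline. The sketch below, with an assumed `"street_sign"` label, stores such an annotation and computes its pixel area with the shoelace formula:

```python
def polygon_area(vertices):
    """Area enclosed by an ordered [(x, y), ...] ring, via the shoelace formula."""
    n = len(vertices)
    acc = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the ring
        acc += x1 * y2 - x2 * y1
    return abs(acc) / 2

# Illustrative annotation: label and coordinates are made up.
sign = {"label": "street_sign", "vertices": [(0, 0), (4, 0), (4, 3), (0, 3)]}
print(polygon_area(sign["vertices"]))  # 4 x 3 rectangle -> 12.0
```

Because the polygon hugs the outline, its area is a much tighter estimate of the object's extent than the area of its bounding box.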

Semantic segmentation


Semantic segmentation involves dividing an image into regions and assigning a class label to every pixel. Regions with different semantic meanings are separated from one another, and each pixel receives the label of the region it belongs to.

Use Case

Semantic segmentation is used in autonomous vehicles, where the vehicle's AI must differentiate between sections of the scene, e.g. road, sidewalk, or grass.

In the field of medicine, semantic segmentation is used in image recognition for diagnosis. It is also used to distinguish crops from weeds on a farm.
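The "one label per pixel" idea can be sketched with a tiny class-id grid standing in for a real segmentation mask. The class map (0 = road, 1 = sidewalk, 2 = grass) and the 4x4 "image" are assumptions for illustration only:

```python
from collections import Counter

# Hypothetical class map; real datasets define their own ids.
CLASSES = {0: "road", 1: "sidewalk", 2: "grass"}

# A 4x4 segmentation mask: every pixel holds exactly one class id.
mask = [
    [0, 0, 1, 2],
    [0, 0, 1, 2],
    [0, 0, 1, 2],
    [0, 0, 1, 2],
]

# Count how much of the scene each class covers.
counts = Counter(pixel for row in mask for pixel in row)
for class_id, n in sorted(counts.items()):
    print(f"{CLASSES[class_id]}: {n} px")
```

In practice the mask is an array the same height and width as the image, and per-class pixel counts like these feed metrics such as class frequency or mean IoU.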

How long does image annotation take?

The time image annotation takes depends on several factors: how complex the image is, how many objects it contains, the type of annotation (each type differs in complexity), and the required accuracy or level of detail.

Simple objects will need less time to annotate compared to complex objects.


Want to find out more about AI as well as our Data Labeling tools and services?

Isahit has a wide range of solutions and tools that will help you train your algorithms. Click below to learn more!