To train an image recognition model, we first need labeled images. Labeling images requires manual human work: human labelers have to look through the images and mark the particular objects they can see. Once the images are labeled, we can train a model to predict the same information about objects in new images without any human help.
The three most common types of image labeling are image classification, object detection and semantic/instance segmentation. You should choose the type of image labeling based on what type of output you expect to get from the model once it is trained. For example, if you would like the model to detect objects, you should label the images for object detection.
Please note that you can use SentiSight.ai for all three types of image labeling; however, automatic model training is currently only available for classification models. If you would like an object detection or semantic/instance segmentation model, you should contact us for a custom project. Alternatively, you can use the SentiSight.ai platform just for labeling, then download the object detection/segmentation labels as a .json file and train a machine learning model yourself.
How to use the SentiSight platform for image labeling
- Labeling for image classification
If you plan to do image classification, you can label the images already during the upload – just write one or more comma-separated labels into the label field. Alternatively, you can add or adjust image labels after the upload using the panel on the left side of the screen. Select some images and press '+' to add a label or '-' to remove a label. You can also rename a label by clicking on it, entering the new name and pressing 'Enter'.
In case an image has more than one label, one of those labels is called the "default" label and it is encircled in white. This is the label that will be used if you train a single-label classification model on images with multiple labels.
You can change which label is the "default" one by clicking on the label of interest. Alternatively, you can select some images that already have the label of interest and label them again using the '+' button with the same label. The default label will change in all of those images.
- Labeling for object detection or image segmentation
- Select a group of pictures that you want to label
- Choose label group: select Object detection or Segmentation on the left menu
- You will now see the first image and the necessary tools to draw a bounding box or contours.
- Try it! Add a new object and draw your chosen figure (label). You will also find some hints; look for the blue question marks!
- Write a new name for it or choose from existing labels.
- Move to the next picture in your selection by pressing 'Next'.
- Uploading annotation files for image classification
Sometimes you may already have image annotations that were prepared with other tools and want to upload them to the SentiSight.ai platform. In that case, you need to save the annotations in a format suitable for SentiSight.ai.
For image classification annotations we support two formats: a comma-separated values (.csv) file, which is used only for classification labels, and JSON (.json), which is also used for object detection and segmentation labels. Here we will look at the CSV format. The first field in each row should be the filename of the image, and the remaining fields should be the image labels. Fields should be separated by commas. The file extension in the first field can optionally be omitted. In the case of multi-label classification, the number of fields may differ between rows, because each image might have a different number of labels. Here is an example of an annotation file for image classification as CSV. See below regarding JSON files.
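Based on the format described above, a classification annotation CSV can be generated with a few lines of Python. The filenames and labels here are made up for illustration:

```python
import csv
import io

# Each row: image filename (extension optional), then one or more labels.
# In the multi-label case, rows may have different numbers of fields.
rows = [
    ["cat_01.jpg", "cat"],                    # single label
    ["dog_02", "dog", "outdoor"],             # extension omitted, two labels
    ["pets_03.png", "cat", "dog", "indoor"],  # three labels
]

buffer = io.StringIO()
csv.writer(buffer).writerows(rows)
print(buffer.getvalue())
```

Writing to an in-memory buffer keeps the example self-contained; in practice you would write to a file and upload it to the platform.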
- Uploading annotation files for object detection or image segmentation
Similarly, it is also possible to upload existing annotation files for classification, object detection and segmentation. In this case, we use the .json format. Here are example annotation files for classification, object detection and segmentation. Below is a rough description of the fields in the .json files.
Classification annotation fields:
- Name - Image name
- classificationLabels - JSON array of assigned classification labels
- mainClassificationLabel - single label that acts as the image's default label for purposes of single-label model training
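A minimal classification annotation built from the fields listed above might look like the following. The field names come from the description; the exact top-level layout (an array with one object per image) is an assumption for illustration:

```python
import json

# One annotation entry per image, using the field names described above.
annotations = [
    {
        "Name": "pets_03.png",
        "classificationLabels": ["cat", "dog", "indoor"],
        # The default label used when training a single-label model.
        "mainClassificationLabel": "cat",
    }
]
print(json.dumps(annotations, indent=2))
```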
Object Detection annotation fields:
- Name - Image name
- Boundaries - JSON array of bounding boxes
- x0, y0 - coordinates of the top left corner of the bounding box
- x1, y1 - coordinates of the bottom right corner of the bounding box
- Label - label of the bounding box
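Putting the object detection fields together, an annotation for a single image might look like this sketch. Field names follow the list above; the assumption that coordinates are pixel values is ours:

```python
import json

# One bounding box per detected object; (x0, y0) is the top-left corner
# and (x1, y1) the bottom-right corner, assumed to be in pixels.
annotation = {
    "Name": "street_07.jpg",
    "Boundaries": [
        {"x0": 34, "y0": 50, "x1": 210, "y1": 180, "Label": "car"},
        {"x0": 300, "y0": 40, "x1": 380, "y1": 190, "Label": "person"},
    ],
}
print(json.dumps(annotation, indent=2))
```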
Segmentation annotation fields:
- Name - Image name
- Polygon Groups - JSON array of groups of polygons that share the same label
- Polygons - JSON array of polygons
- Points - array of points that define a specific polygon
- x, y - coordinates of a point.
- Hole - a True/False value. True if the polygon defines a hole, False otherwise. This parameter is optional; the default value is False.
- Closed - a True/False value. True if the polygon is closed, False otherwise. This parameter is optional; the default value is True.
- Label - label of the polygon group
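Combining the segmentation fields above, a single-image annotation might look like the sketch below. The field names are taken from the description; the nesting shown (groups containing polygons containing points) follows that description, but the concrete values are invented:

```python
import json

# A polygon group shares one label; "Hole" defaults to False and
# "Closed" defaults to True, per the field descriptions above.
annotation = {
    "Name": "floor_plan.png",
    "Polygon Groups": [
        {
            "Label": "room",
            "Polygons": [
                {
                    # Outer boundary of the region.
                    "Points": [{"x": 0, "y": 0}, {"x": 100, "y": 0},
                               {"x": 100, "y": 80}, {"x": 0, "y": 80}],
                    "Closed": True,
                },
                {
                    # Inner polygon marked as a hole (e.g. a pillar).
                    "Points": [{"x": 40, "y": 30}, {"x": 60, "y": 30},
                               {"x": 60, "y": 50}, {"x": 40, "y": 50}],
                    "Hole": True,
                },
            ],
        }
    ],
}
print(json.dumps(annotation, indent=2))
```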