Image Labeling

Label Images

1. Does image labeling require human input?
Yes, image labeling has to be done by humans; however, AI-assisted tools can speed up the process. Some of these tools need to be trained on already labeled images before they can suggest labels for unlabeled ones, while others, such as the smart labeling tool or labeling by similarity, can suggest labels without any training.
2. What can I do once I have finished labeling?
Labels can be downloaded as a .json file, which you can then use to train a machine learning model yourself.
3. What are the benefits of a paid subscription for image labeling?
All features included in the paid plans are also available in the free plan, but in smaller quantities. For example, paid users can label more images, share a project with more users, and get more predictions and labeling time.

Image Labeling Tool

1. What object labels can be assigned to an image?
The image labeling tool by SentiSight.ai allows users to draw bounding boxes, polygons, bitmaps, polylines and points. Any images labeled with these tools are marked with a symbol.
2. What can the object labels be used for?
Object labels can be used to train object detection models (available on SentiSight.ai) and segmentation models (custom projects only), or downloaded as a .json file for in-house model training.
3. What does the “synchronise labels” setting do?
When this setting is enabled, any object label added to an image also labels that image for classification with the same label.

Labeling Tips

1. What is a bounding box?
A bounding box is a simple rectangle shape that surrounds the labeled object within the image. Fixed aspect ratios can be set in the settings.
2. What are points?
Points are often used for labeling very small objects within the image.
3. What are polygons?
A polygon is a complex shape made up of multiple points connected by lines. An object may consist of multiple polygons and/or holes.
4. How can the polygon label be used best?

To draw a polygon, press Enter or New object (shortcut N). The following click will start a new object. Add or move a vertex by selecting a polygon and clicking Edit (shortcut E). To move a vertex, click on it and drag. You can add a new vertex by clicking its projected position when the cursor is close to the polygon's border.

Create a new separate polygon by pressing New polygon in group (shortcut G); it will then be part of the same object. This is handy for labeling multipart objects.

To create a hole, click New hole (shortcut H); holes can be edited just like polygons. Parts of the image covered by a hole are not part of the object the hole belongs to.

5. How can the polylines label be used best?

Polylines are a selection of points connected by lines, useful for labeling parts that are defined by their structure rather than their exact shape.

To draw a polyline, press Enter or New object (shortcut N). The following click will start a new object. Points can be moved by selecting the polyline and clicking Edit (shortcut E). To add points, click Add points (shortcut K); each new point will be connected to the previous one. To remove points, click on a specific one and press Remove points (shortcut R). The polyline will redraw itself, remaining connected.

6. How can the bitmap label be used best?

A bitmap is a freeform hand-drawn mask used to label objects that are complex in shape.

Bitmaps are drawn with a paint-like brush and do not have to be connected in any way to form an object. To draw a bitmap, press Enter or New object (shortcut N). The following click will start a new object. To add to an existing bitmap, select it and press Draw (shortcut E). To change from the brush to the eraser, select a bitmap and press Erase (shortcut R). To fill the mask so that it becomes solid, press Fill (shortcut G) and click inside the closed bitmap shape.

Shared Features

1. What is the occluded icon?
This icon is for information purposes only and has no influence on model training. Selecting the occluded feature will mark the border of the object with a dotted line.
2. What is the visibility icon?
This will hide the object label so it is easier to see and edit other objects. “Hide all objects” and “Show all objects” can also change the visibility of the objects within an image.
3. What do keypoints achieve?
Keypoints mark the position of important points and can be added to polygons and bounding boxes. Keypoint templates can be saved for a specific object label and will be applied automatically when an object with the same label is selected (e.g. “left eye” for a bounding box labeled “Face”).
4. What is Rasterization?
The Rasterize button will convert the bounding boxes and polygons to bitmaps. Note: this will clear all keypoints and attributes from the object.
5. How do I convert a bitmap to a polygon?
Use the “Convert to polygon” button. Users can choose a very detailed polygon conversion or a simpler polygon using “Simplify polygons converted from bitmap”.

Smart labeling tool

1. What is the Smart labeling tool?
A powerful tool that creates complex bitmap masks by separating objects from the background. Just mark a few points in the foreground and background.
2. How do you use the smart selection tool?
  1. Select a rectangular working area for the tool.
  2. Mark the foreground. A few lines are enough for clear, contrasting backgrounds in the image. For regions with complex colors that may blend with the background it might be better…
  3. Mark the background.
  4. Click Extract so that the tool marks the foreground. Ensure all desired parts of the image are covered.
  5. Repeat steps 2-4 until you are happy with the outcome.
  6. Press Done once you are finished. The bitmap tool can be used to make additional touch ups.
Until Done is clicked, no work will be saved.

Classification labeling

1. How do you add a classification label to images during their upload?
During upload, users are presented with a dialog box in which one or more comma-separated labels can be entered into the label field, if suitable labels have not already been created. During a ZIP upload, classification labels can be assigned based on the name of the folder each image belongs to.
2. How do you add a classification label to images using the image label panel?
After uploading images, a classification label can be added using the label panel on the left side. To create a new label, click the label field, enter your label and click Add.
3. How do you change a default classification label?
There are three ways:
  1. Click on the label of interest on the image using your cursor.
  2. Select images that have the desired label, and label them again using ‘+’ with that label. The default label will then change for all of those images.
  3. Click the drop-down arrow next to the label, and then click the star icon.
4. How do you upload labels as a .CSV?
The first field in each row should be the image filename, and the remaining fields should be the image labels. For multi-label classification, the number of fields in each row may vary because each image can have a different number of labels.
5. How do I upload image labels as a .JSON?
Fields have to be filled out for classification, bounding box, polygon, polyline and point labels. See the user guide for more information on how these fields should look.
6. How do you upload a color bitmap as a .PNG?
A bitmap image ready for uploading should have a black background with different colors representing different objects.
7. How do you upload black/white bitmaps as a .ZIP?
Each image should contain objects colored white on a black background. The zip folder named ‘bitmaps’ needs the following structure: image_name/label_name/object_bitmap.png
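For illustration, a hypothetical archive for one image with two labeled objects could be structured like this (all names are examples):

```
bitmaps/
└── image1.jpg/
    ├── dog/
    │   └── object_bitmap.png
    └── cat/
        └── object_bitmap.png
```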

Project sharing & user management

1. How do you share a project?
Click the “Share project” button on the right of the project selection menu. Then, enter the email of the user you would like to share with. That user must already be registered on the SentiSight.ai platform.
2. What can be controlled with the User permissions function?
Users can be added to or removed from projects, their roles can be changed or removed, and ownership of the project can be transferred.
3. How do you change role privileges?
From the User permissions dialog, click Edit roles and then choose the role to edit. Then, click the Edit button to the right of the drop-down menu, where you can manage permissions. Click Save when finished.
4. How do you view the labeling time of users in your project?
On the main page, open “User profile” and click “See times” beside “Labels made”. Two tables will appear for the chosen time period: one providing the time for each day and project, and another with the total labeling time for each project. Tables can be filtered by a specific user or project.

Filtering tools

1. What are the 3 ways images can be filtered?
Images can be filtered by type, labels and image status.
2. What will filtering images by type show?
The type filter checks image properties such as whether an image has a single label or multiple labels, and whether it was labeled by you or by a different user.
3. What will filtering images by label show?
This will filter images based on the classification and/or object labels they have. Make use of the “Not”, “Or” and “And” operators to help review images with a specific label.
4. What will filtering images by image status show?
For shared projects you can filter the project's images by status, which is helpful for keeping track of other users' progress. Use the menu in the panel on the left to check which images have been labeled or not yet seen.

Image Similarity Search

1. Do I need to label images or train a model for image similarity search?
No. Unlike algorithms such as image classification and object detection, the image similarity search tool requires neither image labeling nor model training; you can use it straight out of the box!
2. Is the image similarity search tool online or offline?
The image similarity search tool can be used either online or offline. If you want to use the tool online, you can do so via the SentiSight.ai web platform or via our REST API server. If you would like to use the model offline, you will have to download it and set up your own REST API server.
3. What can I use the image similarity search tool for?
There are two types of image similarity search you can perform using SentiSight.ai:
  • 1vN, which finds images similar to a single query image
  • NvN, which finds the most similar image pairs in your data set
4. What image can I use for the 1vN image similarity search?
The query image for the 1vN image similarity search can either be uploaded from your computer or selected from the images already in your data set on the SentiSight.ai platform. To perform a 1vN image similarity search on an image within your data set, right-click on the image and select ‘image similarity search’ from the dropdown menu.
5. How do I start using the image similarity search tool?
Whether you are using the 1vN or NvN tool, you will first need to create a project and upload a selection of images to serve as the data set for your similarity search queries. However, there is no need to label these images before using the tool.
6. What does the score threshold mean?
When using the NvN image similarity search tool, the score threshold determines the minimum similarity score required for a matching image pair to be displayed.
7. Can I customise the image similarity search thresholds?

In all types of image similarity search, you can optionally specify the maximum number of results to be displayed, as well as the similarity score threshold. Additionally, to reduce the search space, you can specify one or more labels to filter the images in your data set prior to the search.

Users can choose between “and” and “or” operators when filtering images by labels. All of these options can be specified as GET parameters when formatting the URL for the request.

8. How do I use the image similarity search tool via the SentiSight.ai REST API?
To use the image similarity search via the SentiSight.ai REST API, you will need:
  • an API token
  • a project ID
Both of these can be found under the ‘user profile’ menu tab.

To use the 1vN similarity search you will also need an image file, either from an existing project or stored on your computer.

For detailed information on using image similarity search via the REST API, with examples in multiple coding languages, please refer to our user guide.
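As a rough illustration, a 1vN request in Python might look like the sketch below. The exact endpoint path, GET parameter names and request body format are assumptions for illustration only; the header name and base URL follow the predict endpoint documented later in this FAQ, and the authoritative format is in the user guide.

```python
import requests

API_TOKEN = "your_api_token"    # found under the 'user profile' menu tab
PROJECT_ID = "your_project_id"  # found under the 'user profile' menu tab

# Hypothetical endpoint path; consult the user guide for the exact URL.
url = f"https://platform.sentisight.ai/api/similarity/{PROJECT_ID}"

# Optional filters passed as GET parameters (parameter names are illustrative):
params = {"limit": 10, "threshold": 0.5, "labels": "dog,cat", "operator": "or"}

with open("query.jpg", "rb") as f:
    response = requests.post(
        url,
        headers={"X-Auth-token": API_TOKEN},
        params=params,
        data=f.read(),  # assuming the raw image bytes are sent as the body
    )

print(response.json())  # expected: similar images with their similarity scores
```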

9. How do I set up image similarity search to use offline on my own REST API server?

To use the image similarity search offline, you will have to download an offline version of the image similarity search model. To do so, click “Pre-trained models -> Image Similarity Search -> Download Model”. Once the model has downloaded, follow the instructions in the readme.md to set up your local REST API server.

Note that the REST API server must be run on a Linux system, but the client devices can run on any operating system. For more information regarding offline image similarity search, please visit our user guide.

10. How much does the image similarity search tool cost to use offline?
Users can enjoy a 30-day free trial of the offline version of the model. After the trial period, you will have to buy a licence from us, with three options available: slow, medium and fast. The licence price depends on the selected speed, with the free trial running in the slow speed mode. For full pricing, please visit our pricing page.

Image Classification

1. What is Image Classification?
Image classification is a type of model training that predicts whether an image belongs to a certain category or not. The categories can be either physical objects, like “dog” or “cat”, or abstract concepts, like “summer” or “winter”. Classification does not specify the location of the object or concept in an image; it only predicts its presence.
2. How do I create and use an Image Classification model?
The process is relatively straightforward:
  1. Upload your images
  2. Label your images with objects or concepts you want the network to learn to recognize
  3. Train your model using the SentiSight.ai platform
  4. Use the trained model to make predictions on new images.
3. Do I need to label images to train an Image Classification model?
Yes. To train a classification model, you will need labeled images. You can either upload your own labeled images, or use the SentiSight.ai web interface to label them. For more information on labeling images for image classification, please see the uploading and labeling images section of the user guides.

Uploading and Labeling Images

1. What is the difference between single-label and multi-label classification?
As the name suggests, a single-label classification model labels an image with the single class that it belongs to, while a multi-label model can assign more than one class to an image. In single-label classification, the model predicts whichever of the specified classes has the highest probability; a multi-label model predicts all of the specified classes that were identified with a probability higher than the set threshold.
2. How do I create a new label to add to the images?
Using the label panel on the left-hand side, click on the label field, enter the label name and press Add. To add this label to images, select the images and press ‘+’ next to the particular label.
3. How do I remove a label from an image?
To remove a label, select an image and use the ‘-’ button on the side panel, or press the ‘x’ button on the label.
4. What is a default label, and how do I set it?
Where images have more than one label, one of the labels is called the default label, and it is encircled in white. The default label will be used to train single-label classification models. There are three ways to change which label is the default:
  1. Click on the label of interest on the image, using your mouse.
  2. Select some images that already have the label of interest, and label them again using the ‘+’ button with the same label. This will change the default label for all of those images.
  3. Click on the drop-down arrow next to the label and select the star icon. The default label will then change for all images that are assigned that label.
5. How do I upload labels prepared using other tools?
If you want to upload image labels that were annotated using other tools to the SentiSight.ai platform, click the ‘Upload labels’ button on the left panel and choose the type of file to upload.
  • CSV: The first field in each row should be the filename of the image, and the remaining fields should be the image labels. For multi-label classification, each row may have a different number of fields because each image can have a varying number of labels (see the .CSV example in the Image Labeling section above).
  • JSON file should have these fields:
    • name - image name
    • mainClassificationLabel - single label that acts as the image's default label for the purpose of single-label model training
    • classificationLabels - array of assigned classification labels
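A minimal sketch of such a file (image names and label values are hypothetical, and wrapping the entries in a top-level array is an assumption based on the fields listed above):

```json
[
  {
    "name": "image1.jpg",
    "mainClassificationLabel": "dog",
    "classificationLabels": ["dog", "outdoor"]
  },
  {
    "name": "image2.jpg",
    "mainClassificationLabel": "cat",
    "classificationLabels": ["cat"]
  }
]
```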

Training your classification model

1. Once I have uploaded my labeled images, how do I start training classification models?
To begin training models, click ‘Train model’ and select either ‘Single-label classification’ or ‘Multi-label classification’. Set any desired training parameters (or leave them as default) and then click ‘Start’. Track progress at the top of your screen.
2. How do I know whether I should train a single-label or multi-label classification model?
  • Single-label classification: best suited when each image contains a single object or concept. For instance, differentiating between multiple dog breeds (Bulldog, German Shepherd, Poodle, etc.). In this case, each image should only contain one dog.
  • Multi-label classification: best suited for when an image contains multiple objects or concepts. For instance, identifying several different animals within the same image (e.g. dog, cat, chicken, pig, etc.). Multi-label is also well suited to recognizing several non-mutually-exclusive abstract concepts in the same image (e.g. the expression, skin color and gender of a person).
3. What are the additional parameters for advanced users?
  • Validation set size (%): the percentage split between training and validation images.
  • Use user-defined validation set: instead of the automated percentage split, the model uses the images you have marked for validation. Images can be marked for validation using the ‘add to validation set’ option in the right-click menu.
  • Learning rate: modifies the rate at which the model weights are updated throughout the training.
4. How can I get my classification model to train on one class?
Normally, a classification model requires a minimum of two labels. Here is how you train a one-class model:
  1. Upload images containing a label which requires classification.
  2. Upload a group of ‘background’ images which do not contain the chosen object for classification. Include a diversity of background images similar to what is expected in the production usage of this model.
  3. You can then begin training, selecting single-label classification.
  4. Once the model has finished training, click ‘View training statistics’, then ‘Show predictions’. You will see images classified as ‘background’ or your label.
5. Should I use the “best model” or “last model” when analyzing performance?
The ‘best model’ is usually employed alongside large datasets; however, if your dataset lacks diversity and is small in sample size, you should consider using the ‘last model’. This is because the best model is selected based on its performance on the validation set.
6. How can I view predictions on train/validation sets for single-label and multi-label classification models?
Users will most likely want to verify the model's accuracy with their own eyes. To view the predictions performed on the image data set, click ‘Show predictions’ in either the Train or Validation set. Typically, validation predictions offer a truer reflection of a model’s accuracy, seeing as those images are excluded from the training.
  • Single-label prediction percentages of all labels add up to 100%.
  • Multi-label predictions have a minimum score threshold. If this threshold is exceeded then the prediction is known as ‘positive’, and ‘negative’ if the prediction falls below the threshold.
In the basic view, users can only view predictions that exceed the threshold. To view negative predictions, select the advanced view. Green is used for ground truth labels and red for non-ground truth labels. A ground truth label with a predicted score above the threshold, and a non-ground truth label with a predicted score below the threshold, are considered correct. All green predictions below the threshold, and similarly all red predictions above the threshold, are categorised as incorrect. The threshold can be altered to your desired value in the advanced view; the default value is calculated to maximise the model’s performance.
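The positive/negative and correct/incorrect logic described above can be summarized in a short sketch (a hypothetical illustration with made-up labels and scores, not the platform's actual code):

```python
# Hypothetical multi-label prediction scores for one image, in percent.
scores = {"dog": 87.0, "cat": 34.0, "outdoor": 61.0}
ground_truth = {"dog", "outdoor"}  # labels actually assigned to the image
threshold = 50.0                   # default is chosen to maximise performance

for label, score in scores.items():
    positive = score > threshold              # positive vs. negative prediction
    correct = positive == (label in ground_truth)
    status = "positive" if positive else "negative"
    verdict = "correct" if correct else "incorrect"
    print(f"{label}: score={score:.1f}% -> {status}, {verdict}")
```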
7. How can I download predictions on train/validation sets?
To download your predictions, select Download Predictions; this will download a .zip file including the resized images, ground truth labels and predictions.
8. How should I analyse the learning curve?
The advanced view provides validation curves, including a learning curve for accuracy and a learning curve for cross-entropy loss. The red line depicts when the best model was saved; this is typically selected based on the lowest cross-entropy loss on the validation set. If your validation set lacks diversity and is small in size, it might be worth considering the “last” model instead of the “best” model.
9. What is meant by the confusion matrix?
The confusion matrix is presented when viewing advanced single-label statistics and shows how frequently images are classified with the right or wrong label.
10. What can the advanced multi-label statistics show me?
These allow you to examine the model’s cross-entropy loss (the lower the value, the better), its accuracy (split into image-wise and label-wise) and the ROC AUC (Area Under the Receiver Operating Characteristic curve, shown as a %).
11. What can I learn from the precision-recall curve?
The advanced view allows you to view your model’s precision-recall curve, which shows how precision and recall values change based on the selected score threshold. You can set the curve to show results for specific labels, or leave it at the default, which presents statistics for all classes.

Precision is the percentage of true positive predictions out of all positive predictions that were made. Recall is the percentage of correctly predicted instances of a class out of all instances of that class in the data set. F1 represents the harmonic mean of precision and recall.

The score threshold determines which predictions are considered “positive”: only the predictions whose score is above the threshold count as positive. By default the score threshold is optimized to maximize the F1 value, and it is visualized by the red dashed line on the precision-recall curve. You can enter your own threshold by unchecking ‘Use optimized thresholds’ and clicking anywhere on the precision-recall curve, or by entering a value into the text box. Note: score thresholds change simultaneously for both training and validation sets.
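In standard notation, with TP, FP and FN denoting true positive, false positive and false negative counts, these quantities are:

$$\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN}, \qquad F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$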

Making Predictions

1. How do I use the model to make predictions?
There are three ways to make predictions with a SentiSight.ai model:
  • Using the web interface. This is the easiest and fastest way to test your model but it’s not suitable if you want to automate the process.
  • Using our online REST API. The idea is that you train the model using our web interface and then send requests with your images to our online REST API server to get back the predictions for those images. You can send the requests from any operating system, even from mobile devices, as long as they are connected to the internet.
  • Downloading the offline model and setting up your own REST API server. In this case, the REST API server has to be set up on a Linux operating system and you will need an NVIDIA GPU card to reach the maximum speed. The client devices can still run on any operating system, including mobile devices. If everything is correctly set up, this option has the potential to reach a faster speed than the online web interface or the online REST API.
2. How can I make predictions using the web interface?
The SentiSight.ai web interface is the simplest way to use a classification model.
  • For new images: open predictions window by either clicking ‘Make a new prediction’ button in the Trained Models dropdown, or clicking ‘Make a new prediction’ in the Model statistics window.
  • For existing images: right-click on an image in the project and select Predict, then choose your preferred model.
3. Can I use the model offline?
You can download the model for offline use by setting up your own REST API server with a model trained on SentiSight.ai. To do so, download an offline version of the model by clicking the Download model button in ‘View training statistics’. Please note that the REST API server has to be set up on a Linux operating system and you will need an NVIDIA GPU card to reach the maximum speed. On the other hand, you can make requests to your local REST API server from any operating system or mobile device.
4. How can I make predictions via the REST API?
Make predictions using your chosen language via the REST API. This allows you to automate the process and to make predictions from an app or software in development. This mode requires 3 details:
  1. API token (under ‘User profile’ menu tab)
  2. Project ID (under ‘User profile’ menu tab)
  3. Model name (under ‘Trained models’ menu)
Add this header: "X-Auth-token: {your_api_token}"

Use this endpoint:

https://platform.sentisight.ai/api/predict/{your_project_id}/{your_model_name}/

If you prefer to assign the ‘last model’ checkpoint for making your predictions use this:

https://platform.sentisight.ai/api/predict/{your_project_id}/{your_model_name}/last
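Putting these pieces together, a minimal Python sketch might look like this. The header and endpoint are as documented above; sending the raw image bytes as the request body is an assumption, so check the user guide for the exact request format.

```python
import requests

API_TOKEN = "your_api_token"    # under 'User profile' menu tab
PROJECT_ID = "your_project_id"  # under 'User profile' menu tab
MODEL_NAME = "your_model_name"  # under 'Trained models' menu

url = f"https://platform.sentisight.ai/api/predict/{PROJECT_ID}/{MODEL_NAME}/"
# Append "last" to the URL to use the 'last model' checkpoint instead.

with open("image.jpg", "rb") as f:
    response = requests.post(
        url,
        headers={"X-Auth-token": API_TOKEN},
        data=f.read(),  # assumption: image bytes as the request body
    )

print(response.json())  # the model's predictions for the uploaded image
```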

5. How can I set up my own REST API server to use the model offline?
Setting up your own REST API server with a model trained using SentiSight.ai’s platform will allow for offline use.
  1. Download an offline version of the model: click Download model on the ‘View training statistics’ page.
  2. Follow the instructions in README.html to set up your own REST API server (note: this server runs only on a Linux operating system). Client requests can be made either on the same PC, so the model can run fully offline, or from any other device (e.g. mobile). Client devices can run any operating system.
You will be able to run the offline model version for 30 days; afterwards you will need to buy a license.
6. Should I use multiple single-label models, or one multi-label model?
To increase prediction accuracy, using multiple two-class single-label classification models instead of one multi-label classification model can be advisable. Whilst this can increase accuracy, it comes at the expense of having to train and use several models instead of one. For more information, please refer to the user guide section for advanced users on multiple one-class models vs a multi-label model.

Object Detection

Basics of Bounding Box labeling

1. Do I need labeled images before I can train an object detection model?
Yes. You can either provide pre-labeled images, or use the SentiSight.ai range of image labeling tools. The images need to be labeled with the objects that you want the neural network to recognize.
2. What are bounding boxes, and why do I need them for object detection?
Bounding boxes are the only labeling tool necessary for object detection labeling. The objects marked by bounding boxes will be used for training the neural network model. To create a bounding box around the object that you would like to label, click the bounding box icon on the labeling tool’s toolbar or press B. Then, drag a bounding box around the part of the image which contains the object you want to train the model on.
3. How can I speed up the image labeling process?

It is a good idea to label the images for classification as you upload them, because the classification labels will be suggested as label names for the bounding boxes. This speeds up bounding box labeling, provided that the classification labels and object detection labels match.

Additionally, you can use hotkeys 1-9 to quickly change the label of a selected object to one of the existing labels. You can see which hotkey corresponds to which label in the labeling settings (see the "Labeling tool settings" section below). There you can also assign label names to hotkeys.

Note that if you don't want to label all the images in the project, you can either select a number of them or use filters; the labeling tool will ignore images that are not selected or are filtered out. By default, the labeling tool will iterate through all of the images.

Selecting Parameters

1. Can I manually set the training parameters myself?

Basic users can set two parameters:

  • The model training time
  • The time after which the model would stop training if there is no improvement in the performance

The above parameters are usually enough to train a good model. However, if you are an advanced user, you might want to set some additional parameters. To access them, turn on the advanced view. The parameters that you will be able to select and customize include:

  • Use User-defined validation set
  • Change the validation set size percentage
  • Learning rate
  • Model size (small, medium or large)
2. How do I decide between selecting a small, medium and large model?
It is usually recommended to select the large model, as the training time difference between small, medium and large models is negligible, but the accuracy is often higher for larger models. If you are prioritising recognition speed over accuracy, you should go for a smaller model.

Training Object Detection Model

1. When can I start training my own object detection model?
Once you have uploaded and labeled the images, you can start training models. To do this, click Train Model and then choose object detection. Here, you can set the model name, the training time, and the stop time, which determines how long the model will continue training if there is no improvement. For more information, please visit the training your object detection model user guide.
2. How long does it take to train an object detection model?

The standard training time for an object detection model is significantly longer than that for a classification model.

The default training time for object detection models depends on the number of different classes in the training set (1-2 classes: 2 hours, 3-5 classes: 3 hours, 6-10 classes: 6 hours, 11+ classes: 12 hours).
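Expressed as a simple lookup, these defaults look like this (a sketch that mirrors the tiers above):

```python
def default_training_hours(num_classes: int) -> int:
    """Default object detection training time, per the tiers stated above."""
    if num_classes <= 2:
        return 2
    if num_classes <= 5:
        return 3
    if num_classes <= 10:
        return 6
    return 12  # 11 or more classes
```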

Analysing Learning Curve

1. Can I analyse the learning curves of the models?

In object detection model training, you can check the learning curves at any time to see how the training is going. You can also decide to stop the training early if you do not see any improvement in the learning curves.

After the model is trained, you can find the final learning curves in the model info window.

For more information on learning curves, please visit the analyzing the learning curve and early stopping the training section of the user guide.

Analysing statistics and predictions

1. How do I analyze the model’s performance?

After the model has been trained, you can view the model’s performance by clicking on View training statistics from the “Trained models” menu. You can also click Show predictions to see the actual predictions for specific images, for either the train or validation set.

For more information on model statistics and predictions, please visit the analyzing the model’s performance section of the user guide.

2. What is the difference between the Training and Validation statistics?
The Training tab contains statistics on the images that were used for training, while the Validation tab contains statistics for the images which were not used for model training. Each tab contains the label count and global statistics, such as Accuracy, Precision, Recall, F1 and mAP. You can find the definition of each statistic by clicking the question mark next to it.
3. What determines if a prediction is judged to be correct?
A prediction is judged to be correct if the predicted bounding box sufficiently overlaps with the ground truth bounding box. The amount these two boxes overlap is measured by the so-called "Intersection over Union" (IoU) measure. By default, for a prediction to be considered correct the IoU should be more than 50%, but you can change this threshold.
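For reference, here is a minimal sketch of how IoU can be computed for two axis-aligned boxes given as (x1, y1, x2, y2) corner coordinates (an illustration of the measure, not the platform's actual code):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    intersection = max(0, x2 - x1) * max(0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - intersection
    return intersection / union if union else 0.0

# With the default 50% threshold, this prediction would be judged incorrect:
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```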
4. What is the difference between the best model and the last model, and which should I choose?
We usually recommend using the ‘best model’, but if your validation dataset is small and lacks diversity, you might consider using the last model instead. This is because the best model is selected based on its performance on the validation set.

Analysing precision-recall curve

1. What can I learn from the precision-recall curve?
The precision-recall curve visualizes the trade-off controlled by the score threshold. When you increase the score threshold, fewer bounding boxes are drawn, but they are more likely to be correct, thus increasing the precision. When you decrease it, more bounding boxes are drawn, each less likely to be correct, but covering a larger share of the ground truth bounding boxes, thus increasing the recall. By default, SentiSight will use optimized score thresholds that maximize the F1 value for each class. F1 is the harmonic mean between precision and recall. See “Changing score thresholds” below for how the threshold itself works.

Changing score thresholds

1. How do I change the score thresholds?

The score threshold determines when the prediction is considered positive and a bounding box is drawn. For example, if the score threshold is 50%, all bounding boxes whose score is above 50% are drawn. When you increase the score threshold, fewer bounding boxes will be drawn, but they will be more likely correct, thus increasing the precision. On the contrary, when you decrease the score threshold, more bounding boxes will be drawn, each of which will be less likely to be correct, but they will cover a larger amount of ground truth bounding boxes, thus increasing the recall.

By default the score threshold is optimized to maximize the F1 value, and it is visualized by the red dashed line on the precision-recall curve. You can enter your own threshold by unchecking ‘Use optimized thresholds’ and clicking anywhere on the precision-recall curve, or by entering a value into the text box.

Note: score thresholds change simultaneously both for training and validation sets.

Making Predictions

1. How do I use the model to make predictions?
There are three ways to make predictions with a SentiSight.ai model:
  • Using the web interface. This is the easiest and fastest way to test your model but it’s not suitable if you want to automate the process.
  • Using our online REST API. The idea is that you train the model using our web interface and then send requests with your images to our online REST API server to get back the predictions for those images. You can send the requests from any operating system, even from mobile devices, as long as they are connected to the internet.
  • Downloading the offline model and setting up your own REST API server. In this case, the REST API server has to be set up on a Linux operating system and you will need an NVIDIA GPU card to reach the maximum speed. The client devices can still run on any operating system, including mobile devices. If everything is correctly set up, this option has the potential to reach a faster speed than the online web interface or the online REST API.

Downloading model or using it online

1. Can I use the model to make predictions via the REST API?
Yes, you can use your preferred scripting language with the REST API, allowing you to automate the process and make predictions from an app or software you are developing. We provide code samples for several programming languages, including cURL, Python, Java and JavaScript.
2. Can I integrate the model into an app or software that I am developing?
Yes. Via our REST API you can make predictions from your own app or software on any operating system or mobile device.
3. What will I need to make predictions via the REST API?
To begin using your trained model via the REST API you will need these details:
  • API token (available under "User profile" menu tab)
  • Project ID (available under "User profile" menu tab)
  • Model name (shown in many places, for example, under "Trained models" menu)
Follow the instructions in our user guide for more info.
4. Can I use the model offline?
You can download the model for offline use by setting up your own REST API server with a model trained on SentiSight.ai. To do so, download an offline version of the model by clicking the Download model button in ‘View training statistics’. Please note that the REST API server has to be set up on a Linux operating system and you will need an NVIDIA GPU card to reach the maximum speed. On the other hand, you can make requests to your local REST API server from any operating system or mobile device.
5. How can I set up my own REST API server to use the model offline?
Setting up your own REST API server with a model trained using SentiSight.ai’s platform will allow for offline use.
  1. Download an offline version of the model: click Download model on the ‘View training statistics’ page.
  2. Follow the instructions in README.html to set up your own REST API server (note: this server runs only on a Linux operating system). Client requests can be made either on the same PC, so the model can run fully offline, or from any other device (e.g. mobile). Client devices can run any operating system.
  3. You will be able to run the offline model version for 30 days; afterwards you will need to buy a license.