How it Works

Image classification refers to training a model that predicts whether an image belongs to a certain category.

The process is fairly straightforward:

First, you need to upload some images.

Then, you need to label them - this tells the neural network that these images have something in common.

Then, you train your model—SentiSight.ai does this for you.

And finally, you can use the trained model to predict which images belong to which label.

Let's have a closer look at those steps and how you can perform them easily using SentiSight.ai.

Uploading and Labeling images

To train a classification model, first you need to upload and label your images.

Click on the red Upload button to upload individual images, or Upload folder or Upload zip buttons to upload large numbers of images. You can also upload your images using our API which you can read more about here.

There are several ways to label images for classification; you can read about them here.

Default labels

If you are planning to train a single-label model and an image has more than one label, you need to pay attention to which of these labels is the "default" label. A default label is encircled in white and is used when training a single-label classification model. There are three ways to change which label is the default one:

  • Clicking on the label of interest on an image with your mouse.
  • Selecting some images that already have the label of interest, and labeling them again using the '+' button with the same label. The default label will change in all of those images.
  • Clicking on the drop-down arrow next to the label and clicking the star icon. The default label will change in all images that have the label.

Uploading labels

If you already have image annotations that were prepared with some other tools and you want to upload them to the SentiSight.ai platform, click the Upload labels button in the left panel and choose the type of file you want to upload.

Note that the file needs to have a suitable format for the platform.

Please see here about formats and guidelines on creating image annotations and uploading them to SentiSight.ai.

Training your classification model

Once you have uploaded and labeled the images, you can start training the models. To do this, click on Train model dropdown on the upper panel and then choose either Single-label classification or Multi-label classification. Then set the training parameters (or leave them at the default values) and start training by simply clicking on the Start button. You can track the progress of the model training at the top of your screen.


Choosing your training parameters

Why single-label?

A single-label classification model is most useful when each image contains just a single object or concept. For example, it could be used to differentiate between different dog breeds (Labrador Retriever, Bulldog, Poodle, German Shepherd, etc.) or different moods of a person (happy, sad, contemptuous, angry). In the first case the image should contain only one dog, and in the second case the image should contain a single human face. You can learn more about selecting the type of model that suits your needs in our blog.

Training your single label classification model

Click Train, and then choose Single-Label Classification.

Before setting your model to train, you can choose your model name and train time, in minutes.

The Label count shows the labels that will be included when training the model. Untick the boxes next to any labels that you do not want to include.

Note that if an image has more than one label, only the Default one will be used for training (see Default labels for more information)

Why multi-label?

A multi-label model is most useful when there are multiple objects or concepts in an image. For example, it can be used when you want to identify several different animals in the same image (cat, dog, horse, sheep, etc.). It can also be used when you want to recognize multiple abstract concepts in the same image, such as a person's gender, mood and skin color (attributes whose individual values are mutually exclusive). You can learn more about selecting the type of model that suits your needs in our blog.

Training your multi label classification model

Click Train, and then choose Multi-Label Classification.

Before setting your model to train, you can choose your model name and train time, in minutes.

The Label count shows the labels that will be included when training the model. Untick the boxes next to any labels that you do not want to include.
For advanced users: additional parameters

If you are an advanced user, you can switch to the advanced view to see extra parameters that you can set, including:

  • Validation set size (%) - the split between training and validation images. You can also mark specific images for validation by selecting them and choosing "Add to validation set" in the right-click menu.
  • Use user-defined validation set - whether the model should use the images marked for validation instead of an automated percentage split.
  • Learning rate - dictates the rate at which the model weights are updated during training.

From the advanced view you can also see the estimated training steps and the user-defined validation set images.

You will also be able to see the estimated time to calculate ‘bottleneck’ features. This is performed on a GPU, so it will contribute towards the monthly train time limit on your account.
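The validation set size percentage can be pictured as a simple random partition of your uploaded images. A minimal sketch in Python, assuming a plain list of file names (the platform's actual split logic is internal; the function name and seed here are illustrative):

```python
import random

def split_train_validation(image_names, validation_pct=20, seed=42):
    """Randomly hold out validation_pct percent of the images as a validation set."""
    rng = random.Random(seed)
    shuffled = list(image_names)
    rng.shuffle(shuffled)
    n_val = max(1, round(len(shuffled) * validation_pct / 100))
    return shuffled[n_val:], shuffled[:n_val]  # (train, validation)

train, val = split_train_validation([f"img_{i}.jpg" for i in range(10)])
print(len(train), len(val))  # 8 2
```

Marking images for validation by hand (the user-defined validation set) replaces this random split with your own choice of hold-out images.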
For advanced users: One class model training

A common use case scenario requires a model trained to recognize only one type of object or class.

A SentiSight model can be trained to do this. However, since a normal classification model requires at least two labels, you must follow the instructions below to train a one-class model.

Main and background images

To begin, first you need a collection of images that have the relevant label that you want to detect / classify.

You will also need to upload a group of ‘background’ images - images that do not contain the object. For the background images, try to use the same diversity of images that you would expect during the production usage of this model.

Having these background images gives you two classes, which allows you to start training a model.

From there, click train and then select Single-label classification, then start training your model.

Once the model has been trained, you can click ‘View training statistics’. From there, click on ‘Show predictions’ to see the predictions made on the uploaded images. It will classify the images either as your label, or as ‘background’.

You can watch our video tutorial on one class model training for more details.

For advanced users: Multiple one-class models vs Multi-label model

An advanced method to increase prediction accuracy is to exchange a multi-label classification model for multiple two-class single-label classification models, as described in the previous section.

To do so, apply the "background" label to all images and set it as the "default" - see the labeling instructions at the top of the page.

Then add specific labels.

Before training a model, set the label you want to train your model on as default too.

Before training subsequent models, remember to re-set "background" as default on all images and then set the label you want to train the model on as default as well.

By performing these steps for each class you will be able to train multiple single-label classification models instead of one multi-label classification model. This sometimes achieves a better accuracy, but at the expense of having to train and to use several models instead of one.

You can watch this video tutorial for more details (second part).

Analyzing the model’s performance

After the model has been trained, you can view the model’s performance by clicking on View training statistics from the “Trained models” menu.

There you can see the ‘Train time’ taken, as well as when the best model was saved. The best model is chosen based on the classification error on the validation set.

The two tabs, Training and Validation, will show you the performance of the model on the training and validation image sets, their label counts, and global statistics such as Accuracy, Precision, Recall and F1. You can view the definition of each metric by clicking on the question mark next to it.

You can also click ‘Show predictions’ to see the actual predictions for specific images, for either the train or validation set.


Understanding best model

Users are able to choose whether to use the ‘best model’ or the ‘last model’. We usually recommend using the ‘best model’, but if your dataset is small and lacks diversity, you might consider using the last model instead. This is because models are at risk of being ‘overtrained’, which means that the cross entropy loss starts to increase as you keep training.

Viewing predictions on Train and Validation sets

To verify the model's accuracy with your own eyes, you can see predictions performed on the project's images.

Click Show predictions in either Train or Validation set. As a general rule, Validation predictions will be most indicative of the model's accuracy, as they are specifically excluded from the training itself.

Single-label predictions are very straightforward. The percentages of all labels add up to 100%, and the predicted label is the one with the highest confidence score.
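In code terms, the single-label prediction is simply the label with the highest score. A tiny sketch with a hypothetical score dictionary (not the platform's actual response format):

```python
# Hypothetical single-label scores, in percent, summing to 100.
scores = {"Labrador retriever": 72.5, "Bulldog": 19.0, "Poodle": 8.5}

# The predicted label is the one with the highest confidence score.
predicted_label = max(scores, key=scores.get)
print(predicted_label)  # Labrador retriever
```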

For multi-label classification, as opposed to single-label classification, there is a minimum score threshold for predictions. If the prediction for the classification is above this minimum threshold, it is considered a positive prediction. If the prediction is below the threshold, it is considered a negative prediction.

You can filter your prediction images by either all, correct, incorrect, above threshold or below threshold.

An image's predictions are counted as incorrect if at least one of them disagrees with the ground truth labels. In the basic view, you can only see the predictions that are above the threshold. In the advanced view, you can see all of the predictions, including those below the threshold.

If the predicted label is among the ground truth labels, the prediction is highlighted in green. If the predicted label is not amongst the ground truth labels, the prediction is highlighted in red.

When the predicted label is above the score threshold and it can be found among the ground truth labels, the prediction is considered to be correct. When the predicted label is below the score threshold and it cannot be found among the ground truth labels, the prediction is also considered to be correct.

Essentially, a ground truth label with a predicted score above the threshold and a non-ground truth label with a predicted score beneath the threshold are considered correct.

Remember that we use green color for ground truth labels and red color for non-ground truth labels, so all green predictions above threshold and all red predictions below threshold are correct. All green predictions below the threshold, and all red predictions above the threshold are considered incorrect.
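The green/red rule above can be expressed compactly. A sketch in Python, assuming per-label scores and thresholds in percent (the function name, and treating a score exactly at the threshold as "above", are illustrative assumptions):

```python
def prediction_is_correct(label, score, threshold, ground_truth):
    """Correct when the score falls on the matching side of the threshold:
    at or above it for ground truth labels, below it for all other labels."""
    return (label in ground_truth) == (score >= threshold)

gt = {"cat", "dog"}  # ground truth labels for the image
print(prediction_is_correct("cat", 85.0, 50.0, gt))    # True:  green, above threshold
print(prediction_is_correct("dog", 30.0, 50.0, gt))    # False: green, below threshold
print(prediction_is_correct("horse", 10.0, 50.0, gt))  # True:  red, below threshold
```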

The score threshold is calculated to maximize the model’s performance (F1 statistic) on the train set. If you are not happy with the threshold, you can set the threshold yourself in the advanced view.

For advanced users: downloading predictions on train / validation sets

If you would like to download the predictions, you can do so by clicking on Download Predictions, which will download a .zip file that includes the resized images, ground truth labels, and the predictions.

For advanced users: Analyzing the learning curve

In the advanced view you can see the validation curves, which show a learning curve for accuracy and a learning curve for cross-entropy loss. The red line on the learning curves shows when the best model was saved. The best model is selected based on the lowest cross-entropy loss on the validation set. In some cases (for example, when your validation set is small and lacks diversity) you might not want to trust this estimate, so you could choose to use the "last" model instead of the "best" model.
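For intuition, cross-entropy loss penalizes assigning low probability to the true label; lower values are better. A minimal single-label sketch (illustrative only, not the platform's internal computation):

```python
import math

def cross_entropy(predicted_probs, true_label):
    """Single-label cross-entropy: the negative log of the probability
    the model assigned to the true label."""
    return -math.log(predicted_probs[true_label])

print(round(cross_entropy({"cat": 0.9, "dog": 0.1}, "cat"), 3))  # 0.105 - confident and correct: low loss
print(round(cross_entropy({"cat": 0.1, "dog": 0.9}, "cat"), 3))  # 2.303 - confident but wrong: high loss
```

An overtrained model keeps lowering this loss on the training set while the loss on the validation set rises, which is exactly what the learning curve reveals.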
Advanced single-label statistics

Advanced view for single-label model training statistics allows you to examine the model's Cross entropy loss (where lower values are better) and Matthews Correlation coefficient which is shown as a percentage.

Single-label predictions are rather straightforward, as the scores of all labels on an image add up to 100%.

In addition, you can see the confusion matrix, which shows how often an image is predicted as a wrong label.

Advanced multi-label statistics

Advanced view for multi-label model training statistics allows you to examine the model's Cross entropy loss (where lower values are better), Accuracy split into Image-wise and Label-wise, and ROC AUC (Area Under the Receiver Operating Characteristic curve, shown as a percentage).

In addition, you can see the Precision-Recall curve and set custom thresholds to further examine the model's accuracy.

Analyzing precision-recall curve and changing score thresholds

In the advanced view, you can see the precision-recall curve for your model. By default, this curve shows statistics for all classes; however, you can change it to show the precision-recall curve for specific labels.

The precision is the percentage of correct predictions out of all of the predictions that were made. The recall is the percentage of correctly predicted instances of some class out of all instances of that class in the data set. F1 is the harmonic mean of the precision and the recall. The precision, recall and F1 values depend on the selected score threshold.
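These definitions translate directly into code. A sketch computing the three metrics from predicted and ground truth label sets at a fixed threshold (the function name is illustrative):

```python
def precision_recall_f1(predicted, ground_truth):
    """Compute precision, recall and F1 from sets of predicted and ground truth labels."""
    tp = len(predicted & ground_truth)  # true positives: predicted labels that are correct
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f1 = precision_recall_f1({"cat", "dog", "horse"}, {"cat", "dog", "sheep"})
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.667 0.667 0.667
```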

The score threshold determines when a particular label is predicted to be in the image. For example, if the score threshold for a particular label is 50% and the predicted score for this label is above 50%, the label is predicted to be present in the image. The score threshold can be set for each label separately, but it is also possible to set the same score threshold for all labels.

When you increase the score thresholds, fewer labels will be predicted, but they will be more likely to be correct, thus increasing the precision. Conversely, when you decrease the score thresholds, more labels will be predicted, each of which will be less likely to be correct, but they will cover a larger share of the ground truth labels, thus increasing the recall.

By default, we select the score threshold that optimizes F1 score, which is a good compromise. However, if you prioritize either precision or recall, you can adjust the score thresholds for each class manually.

The intersection between the precision-recall curve and the dashed red line represents the operating point at the optimized score threshold. If you would like to change this threshold, uncheck the ‘Use optimized thresholds’ option; you can then enter your own threshold score in the box, or change the score threshold by clicking anywhere on the precision-recall curve. Once you have set the new threshold, the performance statistics will update. When setting your own score threshold, aim for as high a precision and recall as possible: if you can increase one without compromising the other, you should always do so.

Please note that the score thresholds change simultaneously for both the train and validation sets. The new score thresholds are also represented by changed vertical dashes in the “View predictions” window.

If you want to set a uniform threshold for all classes, you can do so using the “Set all” button.


Making predictions

Making predictions using web-interface

The easiest way to use a classification model is through the SentiSight.ai web interface.


Selecting your trained model for prediction will take you to the predictions page, where you will have to click Upload images to choose which images to perform prediction on.

You can click Upload images to start a new prediction without leaving the predictions page.

You can also select a different trained model to launch predictions on from a dropdown menu on the predictions page.

You can download the results either as images grouped by predicted label (as a .zip), in JSON format, or in CSV format.


Predictions on existing images

Alternatively, you can launch predictions on existing project images.

Select the images you want to launch the prediction on, right-click one of the selected images, click AI Tools - Predict, and then choose the desired model.

This will open the predictions page and immediately launch prediction with the selected images on the selected model.

This is the same predictions page as described above, so all instructions regarding launching new predictions and prediction downloads apply.


Making predictions via REST API

Alternatively, you can make predictions by using your preferred scripting language via the REST API, allowing you to automate the process and make predictions from an app or software that you are developing.

To begin using your trained model via REST API you will need these details:

  • API token (available under "Wallet" menu tab)
  • Project ID (available in brackets next to project names)
  • Model name (shown in many places, for example, under "Trained models" menu)

Use this endpoint: https://platform.sentisight.ai/api/predict/{your_project_id}/{your_model_name}/

Please note that for each trained model we save two checkpoints: "the best"—the one with the lowest validation error and "the last"—the one from the last training iteration. By default we will use "the best" model checkpoint to make the predictions for queries via the REST API. However, if you prefer to use "the last" model checkpoint to make your predictions, you can simply add "/last" to the endpoint: https://platform.sentisight.ai/api/predict/{your_project_id}/{your_model_name}/last

Making predictions using an image from your computer

Set the "X-Auth-token" header to your API token string and set "Content-Type" header to "application/octet-stream". Set the body to your image file.

For more details, see the code samples below.

TOKEN="your_token"
PROJECT_ID="your_project_id"
MODEL="your_model_name"
IMAGE_FILENAME="your_image_path"
curl -H "X-Auth-token: $TOKEN" --data-binary @"$IMAGE_FILENAME" \
  -H "Content-Type: application/octet-stream" \
  -X POST "https://platform.sentisight.ai/api/predict/$PROJECT_ID/$MODEL"
import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;

public class App
{
   public static void main( String[] args ) throws IOException
   {
       if (args.length < 4) {
           System.out.println("Usage: java -jar sample.jar api_token project_id model_name file");
           return;
       }
       String token = args[0];
       String projectId = args[1];
       String modelName = args[2];
       String imageFilename = args[3];
       
       byte[] bytes = Files.readAllBytes(new File(imageFilename).toPath());
       
       URL url = new URL("https://platform.sentisight.ai/api/predict/" + projectId + "/" + modelName);
       HttpURLConnection connection = (HttpURLConnection)url.openConnection();        
       connection.setRequestProperty("Content-Type", "application/octet-stream");
       connection.setRequestProperty("X-Auth-token", token);
       connection.setRequestMethod("POST");
       connection.setDoOutput(true);
       DataOutputStream wr = new DataOutputStream(connection.getOutputStream());
       wr.write(bytes);
       wr.flush();
       wr.close();

       BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
       String output;
       StringBuffer response = new StringBuffer();

       while ((output = in.readLine()) != null) {
           System.out.println(output);
           response.append(output);
       }
       in.close();
   }
}
<!DOCTYPE html>
<html lang="en" dir="ltr">
<head>
    <meta charset="utf-8">
    <title>Sample</title>
    <script type="text/javascript">
        const baseApiURL = 'https://platform.sentisight.ai/api/';
        let token = '';
        let predictionId;
        let results;
        let resultOutput;

        function predict() {
            token = document.getElementById('tokenfield').value;
            const projectId = document.getElementById('project').value;
            const modelName = document.getElementById('model').value;
            const input = document.getElementById('upload');
            const resultOutput = document.getElementById('output');
            const file = input.files[0];
            const fr = new FileReader();
            fr.onload = function() {
                results = apiPostRequest('predict/' + projectId + '/' + modelName, fr.result);
                let parsedResults = JSON.parse(results);
                resultOutput.innerText = results
                console.log(parsedResults);
            }
            fr.readAsArrayBuffer(file);
        }

        function apiPostRequest(request, body) {
            const xmlHttp = new XMLHttpRequest();
            xmlHttp.open( "POST",  baseApiURL + request, false );
            xmlHttp.setRequestHeader('Content-Type', 'application/octet-stream');
            xmlHttp.setRequestHeader('X-Auth-token', token);
            xmlHttp.send(body);
            console.log(xmlHttp.responseText);
            return xmlHttp.responseText;
        }
    </script>
</head>
<body>
Token: <input id="tokenfield" type="text" name="" value="">
<br>
Project id: <input id="project" type="number" name="" value="">
<br>
Model name: <input id="model" type="text" name="" value="">
<br>
Upload image: <input id="upload" type="file" name="" value="">
<br>
<button type="button" onclick="predict()">Predict</button>
<br><br><br>
<p id="output">Your results will go here!</p>
</body>
</html>
import requests

token = "your_token"
project_id = "your_project_id"
model = "your_model_name"
image_filename = "your_image_path"

headers = {"X-Auth-token": token, "Content-Type": "application/octet-stream"}

with open(image_filename, 'rb') as handle:
    r = requests.post('https://platform.sentisight.ai/api/predict/{}/{}/'.format(project_id,model), headers=headers, data=handle)

if r.status_code == 200:
    print(r.text)
else:
    print('Error occurred with REST API.')
    print('Status code: {}'.format(r.status_code))
    print('Error message: ' + r.text)
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;

namespace Sample
{
    class Program
    {
        static void Main(string[] args)
        {
            if (args.Length < 4)
            {
                Console.WriteLine("Expected arguments: api_token project_id model_name file");
                return;
            }
            var token = args[0];
            var projectId = args[1];
            var modelName = args[2];
            var imageFilename = args[3];

            var bytes = File.ReadAllBytes(imageFilename);
            var data = new ByteArrayContent(bytes);
            data.Headers.ContentType = MediaTypeHeaderValue.Parse("application/octet-stream");
            
            var uri = new Uri($"https://platform.sentisight.ai/api/predict/{projectId}/{modelName}");
            var client = new HttpClient();
            client.DefaultRequestHeaders.Add("X-Auth-token", token);

            var response = client.PostAsync(uri, data);
            var result = response.Result.Content.ReadAsStringAsync().Result;
            Console.WriteLine(result);
        }
    }
}

Making predictions using an image URL

Using the REST API by providing an image URL is similar to the previous case of using REST API by uploading an image. The only differences are that you need to set the "Content-Type" header to "application/json" and set the body to a JSON formatted string with a "url" parameter specifying the image URL.

For more details, see the code samples below.

TOKEN="your_token"
PROJECT_ID="your_project_id"
MODEL="your_model_name"
IMAGE_URL="your_image_url"
curl --location --request POST "https://platform.sentisight.ai/api/predict/$PROJECT_ID/$MODEL" \
--header "X-Auth-token: $TOKEN" \
--header 'Content-Type: application/json' \
--data-raw "{
    \"url\": \"$IMAGE_URL\"
}"
package sentisight.api.sample;

import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class App
{
    public static void main( String[] args ) throws IOException
    {
        String token = "";
        String projectId = "";
        String modelName = "";
        String image_url = "";

        String body = "{\r\n    \"url\": \"%s\"\r\n}".formatted(image_url);

        URL url = new URL("https://platform.sentisight.ai/api/predict/" + projectId + "/" + modelName);
        HttpURLConnection connection = (HttpURLConnection)url.openConnection();
        connection.setRequestProperty("Content-Type", "application/json");
        connection.setRequestProperty("X-Auth-token", token);
        connection.setRequestMethod("POST");
        connection.setDoOutput(true);
        DataOutputStream wr = new DataOutputStream(connection.getOutputStream());
        wr.writeBytes(body);
        wr.flush();
        wr.close();

        BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
        String output;
        StringBuilder response = new StringBuilder();

        while ((output = in.readLine()) != null) {
            response.append(output);
        }
        System.out.println(response);
        System.out.println(connection.getResponseCode());
        in.close();
    }
}
<!DOCTYPE html>
<html lang="en" dir="ltr">
<head>
    <meta charset="utf-8">
    <title>Sample</title>
    <script type="text/javascript">
        const baseApiURL = 'https://platform.sentisight.ai/api/';
        let token = '';
        let results;
        let resultOutput;

        function predict() {
            token = document.getElementById('tokenfield').value;
            const projectId = document.getElementById('project').value;
            const modelName = document.getElementById('model').value;
            resultOutput = document.getElementById('output');
            const url = document.getElementById('url').value;
            const data = JSON.stringify({
                url
            });
            results = apiPostRequest('predict/' + projectId + '/' + modelName, data)
            let parsedResults = JSON.parse(results);
            resultOutput.innerText = results
            console.log(parsedResults);
        }

        function apiPostRequest(request, body) {
            const xmlHttp = new XMLHttpRequest();
            xmlHttp.open( "POST",  baseApiURL + request, false );
            xmlHttp.setRequestHeader('Content-Type', 'application/json');
            xmlHttp.setRequestHeader('X-Auth-token', token);
            xmlHttp.send(body);
            console.log(xmlHttp.responseText);
            return xmlHttp.responseText;
        }
    </script>
</head>
<body>
Token: <input id="tokenfield" type="text" name="" value="">
<br>
Project id: <input id="project" type="number" name="" value="">
<br>
Model name: <input id="model" type="text" name="" value="">
<br>
Url: <input id="url" type="text" name="" value="">
<br>
<button type="button" onclick="predict()">Predict</button>
<br><br><br>
<p id="output">Your results will go here!</p>
</body>
</html>
import requests
import json

token = "your_token"
project_id = "your_project_id"
model = "your_model_name"
image_url = "http://your-image-url.png"

payload = json.dumps({
    "url": image_url
})

headers = {"X-Auth-token": token, "Content-Type": "application/json"}

r = requests.post('https://platform.sentisight.ai/api/predict/{}/{}/'.format(project_id, model), headers=headers,
                  data=payload)

if r.status_code == 200:
    print(r.text)
else:
    print('Error occurred with REST API.')
    print('Status code: {}'.format(r.status_code))
    print('Error message: ' + r.text)
using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Text.Json;

namespace Sample
{
    class Program
    {
        static void Main(string[] args)
        {
            if (args.Length < 4)
            {
                Console.WriteLine("Expected arguments: api_token project_id model_name url");
                return;
            }
            var token = args[0];
            var projectId = args[1];
            var modelName = args[2];
            var imageUrl = args[3];
            
            using var ms = new MemoryStream();
            using var writer = new Utf8JsonWriter(ms);
            writer.WriteStartObject();
            writer.WriteString("url", imageUrl);
            writer.WriteEndObject();
            writer.Flush();
            var json = Encoding.UTF8.GetString(ms.ToArray());
            
            var data = new StringContent(json, Encoding.UTF8, "application/json");

            var uri = new Uri($"https://platform.sentisight.ai/api/predict/{projectId}/{modelName}");
            var client = new HttpClient();
            client.DefaultRequestHeaders.Add("X-Auth-token", token);

            var response = client.PostAsync(uri, data);
            var result = response.Result.Content.ReadAsStringAsync().Result;
            Console.WriteLine(result);
        }
    }
}

Making predictions using a Base64 encoded image

Using the REST API by providing a Base64 encoded image is very similar to the case of using REST API with an image URL. The only difference is that you need to change the JSON parameter name "url" to "base64".
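The code samples below leave the Base64 string empty. One way to produce it from an image file, using only Python's standard library (the helper name is illustrative):

```python
import base64

def encode_image(path):
    """Read an image file and return the Base64 string for the "base64" JSON parameter."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# Decoding the produced string recovers the original bytes exactly.
assert base64.b64decode(base64.b64encode(b"example")) == b"example"
```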

For more details, see the code samples below.

TOKEN="your_token"
PROJECT_ID="your_project_id"
MODEL="your_model_name"
IMAGE_B64=""
curl --location --request POST "https://platform.sentisight.ai/api/predict/$PROJECT_ID/$MODEL" \
--header "X-Auth-token: $TOKEN" \
--header 'Content-Type: application/json' \
--data-raw "{
    \"base64\": \"$IMAGE_B64\"
}"
package sentisight.api.sample;

import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class App
{
    public static void main( String[] args ) throws IOException
    {
        String token = "";
        String projectId = "";
        String modelName = "";
        String image_b64 = "";

        String body = "{\r\n    \"base64\": \"%s\"\r\n}".formatted(image_b64);

        URL url = new URL("https://platform.sentisight.ai/api/predict/" + projectId + "/" + modelName);
        HttpURLConnection connection = (HttpURLConnection)url.openConnection();
        connection.setRequestProperty("Content-Type", "application/json");
        connection.setRequestProperty("X-Auth-token", token);
        connection.setRequestMethod("POST");
        connection.setDoOutput(true);
        DataOutputStream wr = new DataOutputStream(connection.getOutputStream());
        wr.writeBytes(body);
        wr.flush();
        wr.close();

        BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
        String output;
        StringBuilder response = new StringBuilder();

        while ((output = in.readLine()) != null) {
            response.append(output);
        }
        System.out.println(response);
        System.out.println(connection.getResponseCode());
        in.close();
    }
}
<!DOCTYPE html>
<html lang="en" dir="ltr">
<head>
  <meta charset="utf-8">
  <title>Sample</title>
  <script type="text/javascript">
    const baseApiURL = 'https://platform.sentisight.ai/api/';
    let token = '';
    let results;
    let resultOutput;

    function predict() {
      token = document.getElementById('tokenfield').value;
      const projectId = document.getElementById('project').value;
      const modelName = document.getElementById('model').value;
      resultOutput = document.getElementById('output');
      const base64 = document.getElementById('base64').value;
      const data = JSON.stringify({
        base64
      });
      results = apiPostRequest('predict/' + projectId + '/' + modelName, data);
      const parsedResults = JSON.parse(results);
      resultOutput.innerText = results;
      console.log(parsedResults);
    }

    function apiPostRequest(request, body) {
      const xmlHttp = new XMLHttpRequest();
      xmlHttp.open( "POST",  baseApiURL + request, false );
      xmlHttp.setRequestHeader('Content-Type', 'application/json');
      xmlHttp.setRequestHeader('X-Auth-token', token);
      xmlHttp.send(body);
      console.log(xmlHttp.responseText);
      return xmlHttp.responseText;
    }
  </script>
</head>
<body>
Token: <input id="tokenfield" type="text" name="" value="">
<br>
Project id: <input id="project" type="number" name="" value="">
<br>
Model name: <input id="model" type="text" name="" value="">
<br>
Base64: <input id="base64" type="text" name="" value="">
<br>
<button type="button" onclick="predict()">Predict</button>
<br><br><br>
<p id="output">Your results will go here!</p>
</body>
</html>
Python:

import requests
import json

token = "your_token"
project_id = "your_project_id"
model = "your_model_name"
image_b64 = ""

payload = json.dumps({
    "base64": image_b64
})

headers = {"X-Auth-token": token, "Content-Type": "application/json"}

r = requests.post('https://platform.sentisight.ai/api/predict/{}/{}'.format(project_id, model), headers=headers,
                  data=payload)

if r.status_code == 200:
    print(r.text)
else:
    print('Error occurred with REST API.')
    print('Status code: {}'.format(r.status_code))
    print('Error message: ' + r.text)
C#:

using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Text.Json;

namespace Sample
{
    class Program
    {
        static void Main(string[] args)
        {
            if (args.Length < 4)
            {
                Console.WriteLine("Expected arguments: api_token project_id model_name base64");
                return;
            }
            var token = args[0];
            var projectId = args[1];
            var modelName = args[2];
            var imageB64 = args[3];
            
            using var ms = new MemoryStream();
            using var writer = new Utf8JsonWriter(ms);
            writer.WriteStartObject();
            writer.WriteString("base64", imageB64);
            writer.WriteEndObject();
            writer.Flush();
            var json = Encoding.UTF8.GetString(ms.ToArray());
            
            var data = new StringContent(json, Encoding.UTF8, "application/json");

            var uri = new Uri($"https://platform.sentisight.ai/api/predict/{projectId}/{modelName}");
            var client = new HttpClient();
            client.DefaultRequestHeaders.Add("X-Auth-token", token);

            var response = client.PostAsync(uri, data);
            var result = response.Result.Content.ReadAsStringAsync().Result;
            Console.WriteLine(result);
        }
    }
}

SentiSight.ai Swagger specification

List of endpoints: https://app.swaggerhub.com/apis-docs/SentiSight.ai/sentisight.

API code samples: https://app.swaggerhub.com/apis/SentiSight.ai/sentisight.

You can try out our REST API interactively and convert the Swagger specification into code samples in many different languages.

Using the model offline—setting up your own REST API server

For a variety of reasons you might want to use a trained model independently of the SentiSight.ai platform, or even without an internet connection at all. You can do that by setting up your own REST API server with a model you trained on SentiSight.ai.

In order to set up your own REST API server, you will have to download an offline version of the model. To do that, click on the Download model button in the "View training statistics" page.

After the model is downloaded, follow the instructions in Readme.md to set up your local REST API server. You can make client requests from the same PC on which you set up the server, so that the model runs completely offline. Alternatively, once the REST API server is set up, you can also make client requests to it from many different devices (including mobile devices) on your network. Note that the REST API server must run on a Linux system, but the client devices can use any operating system.
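As a sketch, a client request to your own server looks just like a request to the SentiSight.ai platform, only with the address changed. The host, port and endpoint path below are placeholder assumptions, not documented values; use the address given in the Readme.md that ships with the downloaded model:

```python
import json
import urllib.request

# Placeholder address -- the actual host, port and path for your local
# server are given in the Readme.md shipped with the downloaded model.
SERVER_URL = "http://localhost:5000/predict"

def predict_local(image_b64):
    """Send a Base64-encoded image to the local REST API server."""
    body = json.dumps({"base64": image_b64}).encode("utf-8")
    req = urllib.request.Request(
        SERVER_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Because the request body and headers match the platform's API, the other code samples above can be adapted the same way by swapping in the local server address.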

The offline version of the model can be run as a free trial for 30 days. After this period, if you like the offline version, you will have to buy a license from us. Note that we have three options for the speed of the offline model: "slow", "medium" and "fast". The license price depends on the selected speed. The free trial always runs on the "fast" speed mode.


Video tutorials

Below you can find a number of video tutorials that are relevant to this topic.

Basic workflow tutorial


Topics covered:
  • Training a single label classification model
  • Analyzing the model's performance
  • Using the model inside the platform or via REST API
Labeling images for classification


Topics covered:
  • Labeling images during the upload
  • Adjusting the labels after the upload
  • Changing the default label for the image
Selecting images for training


Topics covered:
  • The platform's default behavior - training on all images
  • Filtering images by label or by type
  • Selecting images for training manually
Training a single label classification model in detail


Topics covered:
  • Explanation of training parameters
  • The relationship of per-class and global performance statistics
  • Viewing and downloading predictions on train/validation sets
  • Analyzing the learning curves
  • Analyzing the confusion matrix
  • Using the model inside the platform or via REST API
Training a multi-label classification model in detail


Topics covered:
  • Uploading images from a folder
  • Uploading image classification labels
  • Including/excluding image labels from training
  • Analyzing multi-label predictions
  • Changing score thresholds for classification
  • Analyzing precision-recall curve
  • The definition of "best" and "last" model
  • Making multi-label predictions
Training a model on one class tutorial


Topics covered:
  • How to train a classification model on one class
  • How to train multiple one-class classification models in an easy way