Training a multi-label classification model in detail

Topics covered:
  • Uploading images from a folder
  • Uploading image classification labels
  • Including/excluding image labels from training
  • Analyzing multi-label predictions
  • Changing score thresholds for classification
  • Analyzing precision-recall curve
  • The definition of "best" and "last" model
  • Making multi-label predictions

The images used in this tutorial can be downloaded here

You can download the video tutorial here

Thanks to our partners at conchology.be for kindly providing the images for this tutorial.


  • Uploading and labeling your images

    First, you will need to upload your images. There are three ways to do this: you can upload images individually, upload a folder of images, or upload a .zip file.

    Optionally, you can label images during the upload process; however, this step can be skipped. Once you have uploaded the images, you can label them either using the tools on the left-hand menu or by uploading previously created labels using the ‘Upload Labels’ button. The file format for uploading image labels is quite simple: the image file name goes in the first column and the image labels in the remaining columns, as in the sample below.
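
    For example, a label file in this format might look like the following (the file names and labels are made up for illustration):

      shell_001.jpg,Conus,striped
      shell_002.jpg,Cypraea
      shell_003.jpg,Conus,spotted

    Each row lists one image file, followed by as many label columns as apply to that image.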

  • Training your model

    Once you have uploaded and labeled the images, you can start training models. To do this, click on ‘Train model’ and then choose multi-label classification. From the basic view, you can choose the model name, as well as set the training time, in minutes. The label count shows the labels that will be included when training the model. You can untick the boxes next to any labels that you do not want to include.
  • Advanced training parameters

    If you are an advanced user, click on the advanced view to see extra parameters that you can set, including:
    • Use user-defined validation set
    • Validation set size
    • Learning rate
    • Batch size (Note: this parameter is now removed)
    • Positive prediction weight (Note: this parameter is now removed)
    From the advanced view, you can also see the estimated training steps, the estimated time to calculate ‘bottleneck’ features, and the user-defined validation set images.
  • The training process

    Once you are happy with the training parameters, set your model to start training by clicking ‘Start’. You can track the progress of the model training at the top of your screen.
  • Analyzing the model’s performance

    After the model has been trained, you can view its performance by clicking on ‘View training statistics’ in the “Trained models” menu. This table has two sections, ‘Train’ and ‘Validation’. In the Train section, you will see:
    • The label count
    • Global statistics: Accuracy, Precision, Recall and F1 (see the sketch after this list)
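
    As a point of reference, here is a minimal Python sketch of one common way such global multi-label statistics can be computed: pooling true/false positives and negatives over all image-label pairs (micro-averaging). It illustrates the idea only, not SentiSight’s actual implementation, and the example matrices are made up:

      import numpy as np

      # Rows = images, columns = labels; 1 means the label applies.
      y_true = np.array([[1, 0, 1],
                         [0, 1, 0],
                         [1, 1, 0]])
      y_pred = np.array([[1, 0, 0],
                         [0, 1, 0],
                         [1, 0, 0]])

      tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
      fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
      fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
      tn = np.sum((y_pred == 0) & (y_true == 0))  # true negatives

      accuracy = (tp + tn) / y_true.size
      precision = tp / (tp + fp)
      recall = tp / (tp + fn)
      f1 = 2 * precision * recall / (precision + recall)
      print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
            f"recall={recall:.2f} F1={f1:.2f}")
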
  • Show Predictions

    You can also click ‘Show predictions’ to see the actual predictions for specific images, for either the train or validation set.
  • Understanding the score threshold for classification

    For multi-label classification, as opposed to single-label classification, there is a minimum score threshold for predictions. If a prediction’s score is above this threshold, it is considered a positive prediction; if it is below the threshold, it is considered a negative prediction.

    You can filter the prediction images to show: all, correct, incorrect, above threshold, or below threshold.

    An image’s predictions are counted as incorrect if at least one of them disagrees with the ground truth labels. In the basic view, you can only see the predictions that are above the threshold; in the advanced view, you can see all of the predictions, including those below the threshold.

    If the predicted label is among the ground truth labels, the prediction is highlighted in green. If the predicted label is not amongst the ground truth labels, the prediction is highlighted in red.

    When a predicted label is above the score threshold and is found among the ground truth labels, the prediction is considered correct. Likewise, when a predicted label is below the score threshold and is not found among the ground truth labels, the prediction is also considered correct.

    Remember that green marks ground truth labels and red marks non-ground truth labels, so all green predictions above the threshold and all red predictions below the threshold are correct, while all green predictions below the threshold and all red predictions above the threshold are incorrect.
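
    The rule can be summarised in a few lines of Python. The sketch below is an illustration of the logic described above, with hypothetical labels, scores and threshold, not SentiSight’s internals:

      ground_truth = {"Conus", "striped"}  # hypothetical ground truth labels
      scores = {"Conus": 0.92, "striped": 0.41, "Cypraea": 0.15}
      threshold = 0.5  # hypothetical score threshold

      for label, score in scores.items():
          positive = score >= threshold  # above threshold => positive prediction
          # Correct when a positive prediction is a ground truth label
          # ("green above the threshold") or a negative prediction is not
          # a ground truth label ("red below the threshold").
          correct = positive == (label in ground_truth)
          print(label, score, "correct" if correct else "incorrect")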

    The score threshold is calculated to maximise the model’s performance (the F1 statistic) on the train set. If you are not happy with this threshold, you can set it yourself in the advanced view.
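
    To make the idea concrete, here is a minimal sketch of finding the threshold that maximises F1, using scikit-learn’s precision-recall utilities on hypothetical scores. It illustrates the principle only; SentiSight computes its optimised thresholds internally:

      import numpy as np
      from sklearn.metrics import precision_recall_curve

      y_true = np.array([1, 0, 1, 1, 0, 0, 1])  # hypothetical ground truth
      scores = np.array([0.9, 0.4, 0.7, 0.55, 0.2, 0.6, 0.8])  # hypothetical scores

      precision, recall, thresholds = precision_recall_curve(y_true, scores)
      # precision and recall have one more entry than thresholds; align them.
      f1 = 2 * precision[:-1] * recall[:-1] / (precision[:-1] + recall[:-1] + 1e-12)
      best = int(np.argmax(f1))
      print(f"best threshold={thresholds[best]:.2f}, F1={f1[best]:.2f}")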

  • Analyzing precision-recall curve

    In the advanced view, you can see the precision-recall curve for your model. By default, this curve shows statistics for all classes; however, you can change it to show the precision-recall curve for a specific label.

    On this curve, the intersection of the dashed red line and the precision-recall curve corresponds to the optimised score threshold. If you would like to change this threshold, uncheck the ‘Use optimised thresholds’ feature; you can then enter your own threshold score in the box or click anywhere on the precision-recall curve. Once you have set the new threshold, the performance statistics will update. When setting your own score threshold, aim for as high a precision and recall as possible.

    Please note that the score thresholds change simultaneously for both the train and validation sets. The new score thresholds are also reflected by the shifted vertical dashed lines in the “View predictions” window.

    If you want to set a uniform threshold for all classes, this can be achieved using the same method after choosing ‘All classes’ from the drop-down menu. You will be presented with a yes/no dialog confirming that you want to apply this uniform threshold. NOTE: this has since been replaced by a ‘Set all’ button.
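
    Conceptually, a uniform threshold simply replaces each per-class value with one shared value, as in this tiny hypothetical sketch:

      thresholds = {"Conus": 0.62, "Cypraea": 0.48, "striped": 0.55}

      def set_all(thresholds, value):
          """Apply one uniform threshold to every class."""
          return {label: value for label in thresholds}

      thresholds = set_all(thresholds, 0.5)  # every class now uses 0.5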

  • Choosing between the ‘best’ and the ‘last’ model

    Users can choose between the ‘best model’ and the ‘last model’. We usually recommend using the ‘best model’, but if your dataset is small and lacks diversity, you might consider using the last model instead. This is because models are at risk of being ‘overtrained’, which means that the cross-entropy loss on the validation set starts to increase as you keep training.
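
    The distinction can be pictured with a short sketch: the ‘best’ model is the checkpoint with the lowest validation loss seen during training, while the ‘last’ model is simply the final checkpoint. The loss values below are made up to show a typical overtraining curve:

      # Validation loss per checkpoint; it starts rising after checkpoint 3,
      # which is the overtraining effect described above.
      val_losses = [0.92, 0.61, 0.45, 0.39, 0.42, 0.50, 0.58]

      best_step = min(range(len(val_losses)), key=val_losses.__getitem__)
      last_step = len(val_losses) - 1
      print(f"best model: checkpoint {best_step}, loss {val_losses[best_step]}")
      print(f"last model: checkpoint {last_step}, loss {val_losses[last_step]}")
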
  • Make new predictions

    Finally, you can make new predictions using your model by clicking the ‘Make new prediction’ button. When you have uploaded your images, you can again choose whether to use the best model and whether to use custom thresholds.
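
    If you prefer to script this step rather than use the web interface, SentiSight.ai also exposes a REST API for predictions. The sketch below assumes the endpoint pattern and X-Auth-token header from SentiSight’s public API documentation; check the current docs before relying on it, and replace the token, project ID, model name and image path placeholders with your own values:

      import requests

      TOKEN = "YOUR_API_TOKEN"  # placeholder
      PROJECT_ID = "12345"  # placeholder
      MODEL_NAME = "my-shell-model"  # placeholder

      url = f"https://platform.sentisight.ai/api/predict/{PROJECT_ID}/{MODEL_NAME}/"
      with open("shell_001.jpg", "rb") as f:  # hypothetical image file
          response = requests.post(
              url,
              headers={"X-Auth-token": TOKEN,
                       "Content-Type": "application/octet-stream"},
              data=f.read(),
          )
      response.raise_for_status()
      print(response.json())  # predicted labels with their scores
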
  • Downloading the results

    You can download the results either as images grouped by predicted label (as a .zip), in JSON format, or in CSV format.