Results & Testing

When the training process is over, you can explore the results.

[Screenshot: Trained models menu]
Open the Trained models menu. Next to the newly trained model you will have two options: View training statistics, to analyze the model's performance, and Make a new prediction, to try out your model.
[Screenshot: Multi-label statistics, training set]

In the model's overview you will find statistical measures calculated separately for the training and validation sets. In each tab, the statistics are further divided into two types: global statistics, averaged over all classes, and per-class statistics. You can find the definition of each statistical measure by clicking on it.
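
To build intuition for the difference between the two types, here is a minimal sketch using scikit-learn-style metrics. The labels and predictions are invented for illustration; this is not the platform's own implementation:

    import numpy as np
    from sklearn.metrics import precision_score, recall_score

    # Hypothetical single-label ground truth and predictions for 3 classes (0, 1, 2).
    y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
    y_pred = np.array([0, 1, 1, 1, 2, 0, 2, 1])

    # Per-class statistics: one value per class.
    print("per-class precision:", precision_score(y_true, y_pred, average=None))
    print("per-class recall:   ", recall_score(y_true, y_pred, average=None))

    # Global statistics: the per-class values averaged over all classes.
    print("global precision:", precision_score(y_true, y_pred, average="macro"))
    print("global recall:   ", recall_score(y_true, y_pred, average="macro"))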

[Screenshot: Multi-label statistics, Advanced view]

Enabling Advanced view will display additional information about your model:

  • Supplementary or changed statistical tables
  • Learning curves
  • A confusion matrix for single-label models (a minimal sketch of how such a matrix is computed follows this list)
  • Score thresholds and a Precision-Recall plot for multi-label classification and object detection models
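
The sketch below shows how a confusion matrix of this kind can be computed with standard tools; the labels are invented for illustration, and this is not the platform's own code:

    from sklearn.metrics import confusion_matrix

    # Hypothetical ground truth and predictions for a single-label model.
    y_true = ["cat", "cat", "dog", "dog", "bird", "bird"]
    y_pred = ["cat", "dog", "dog", "dog", "bird", "cat"]

    # Rows are true classes, columns are predicted classes; each off-diagonal
    # cell counts how often one class was mistaken for another.
    print(confusion_matrix(y_true, y_pred, labels=["bird", "cat", "dog"]))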

Multi-label classification and object detection models have score thresholds, which determine whether a prediction counts as positive. Our algorithm optimizes these score thresholds for the best performance, as measured by the F1 score. However, you can change these values manually to suit your needs: simply uncheck Use optimized thresholds and either enter your values by hand or click on the Precision-Recall graph below. When you are done setting the thresholds, click the Save thresholds button, which appears whenever any threshold value has changed.
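
As an illustration of this kind of optimization, the following sketch picks, for a single class, the threshold that maximizes the F1 score along a precision-recall curve. The scores are invented, and the platform's actual procedure may differ in detail:

    import numpy as np
    from sklearn.metrics import precision_recall_curve

    # Hypothetical per-image scores for one class, with binary ground truth.
    y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])
    scores = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.2, 0.9, 0.55, 0.7, 0.3])

    precision, recall, thresholds = precision_recall_curve(y_true, scores)
    # precision and recall have one more entry than thresholds; drop the last point.
    p, r = precision[:-1], recall[:-1]
    f1 = 2 * p * r / np.clip(p + r, 1e-12, None)

    best = np.argmax(f1)
    print(f"best threshold: {thresholds[best]:.2f} (F1 = {f1[best]:.2f})")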

[Screenshot: Image predictions]

You can check predictions on your training or validation set by selecting Show predictions in the corresponding tab. Next to each image you will see the model's prediction, together with the set thresholds if it's a multi-label classification or object detection model. To use the custom thresholds you have set, uncheck Use optimized thresholds. To see predictions that did not meet the thresholds in multi-label classification, check Advanced view.
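
Conceptually, applying score thresholds to multi-label predictions is a per-class comparison. A minimal sketch with invented numbers:

    import numpy as np

    # Hypothetical model scores for 2 images and 3 classes.
    scores = np.array([[0.92, 0.15, 0.60],
                       [0.30, 0.85, 0.55]])

    # One score threshold per class (set by hand or optimized for F1).
    thresholds = np.array([0.50, 0.40, 0.70])

    # A label is predicted (positive) only where its score meets the threshold.
    print(scores >= thresholds)
    # [[ True False False]
    #  [False  True False]]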

Learn more about the process

  • How can I improve my model's performance?

    If you are not satisfied with the accuracy of your model, you can experiment with adding more training images and increasing the training time. If this doesn't help, you might also want to try modifying the other training parameters in the Advanced view of the training window. We recommend exploring the statistics to form better hypotheses about how to improve the model: choose Advanced view in the statistics window to see the learning curves, confusion matrices and many other statistical indicators that may help you draw conclusions on how to make the model more effective.

    You can also request a Custom project and our experts will help. They will manage the process to meet your requirements, whether that calls for a different algorithm or specific additional data.

  • Training accuracy, validation accuracy – what are these?!

    Machine learning is a complicated subject and there is much to learn. If you have trouble understanding the "Training" and "Validation" concepts, have a look at this analogy:

    Suppose your teacher gave you a lot of paintings to analyze, so that you could learn to recognize their style (Gothic, Baroque, Rococo, Classical, etc.). Your task is to find the differences and work out what each style looks like. It will not be easy at first, but after some time you will start to generalize which features are common to a particular class. In most cases, you will eventually learn to identify all of the pictures you've seen correctly. This is analogous to 100% training accuracy.

    Now that you have advanced, you are given another set of paintings that you haven't seen before. This validates whether you have learned something meaningful, or missed the point and simply learned everything by heart.

    Finally, testing the model is analogous to a person who has learned the subject well enough to use the skill in real life. Note that a person can continue learning when they see a wider variety of pictures later; the same can be done with the algorithm.

    1. Sometimes an algorithm finds it easier to learn things by heart than to develop general patterns (the sketch below illustrates this effect).
    2. Bad validation results may indicate that you have trained your model on an insufficient variety of data. In that case, try to collect more data.
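
    To make "learning by heart" concrete, here is an illustrative sketch, unrelated to the platform's internals, in which an unconstrained decision tree memorizes a noisy training set and scores far higher on training data than on validation data:

        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        # A synthetic, noisy dataset stands in for the paintings.
        X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                                   flip_y=0.2, random_state=0)
        X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

        # An unconstrained tree can memorize its training set ("learning by heart").
        model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
        print("training accuracy:  ", model.score(X_train, y_train))  # close to 1.0
        print("validation accuracy:", model.score(X_val, y_val))      # noticeably lower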

    Getting a high training accuracy is a good sign, but not a final indicator of the general performance of the model. On the other hand, if your training accuracy is low, you don't have a good model. Sometimes this happens because the data is too specific or ambiguous, or because the model has an unsuitable structure.

    Important: to perform quality testing, make sure you collect a sufficient number of testing images. A small testing data set might not be representative: if it contains some unusual images (such as a very uncommon side of the object, heavy occlusion, etc.), the testing accuracy will be much lower than expected. A rough calculation below shows why.
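
    How large is "sufficient"? As a back-of-the-envelope guide, the sketch below applies the standard binomial error formula to invented numbers to show how much a measured accuracy can swing on a small test set:

        import math

        def accuracy_std_error(true_accuracy, n_images):
            """Standard error of an accuracy measured on n_images test images."""
            return math.sqrt(true_accuracy * (1 - true_accuracy) / n_images)

        # A model that is truly 90% accurate, measured on test sets of various sizes:
        for n in (20, 100, 1000):
            print(f"n = {n:4d}: measured accuracy 0.90 +/- {accuracy_std_error(0.9, n):.3f}")
        # With only 20 images, the measurement easily swings by several percentage points.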