Fine tuning
After the initial definition of a model, there is generally an optimization phase in which test documents that were not used in the definition are evaluated to check the model's performance.
Each model can be tested using the Text Classification API Test Console. There is more information on how to do this in the Build/Test a model section.
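Besides the Test Console, a model can also be exercised programmatically during this phase. The following Python sketch is purely illustrative: the endpoint URL, parameter names (key, model, txt) and response format are placeholder assumptions, not the actual Text Classification API values, so check the API documentation before using anything like it.

```python
import requests

# Hypothetical values: replace with the real endpoint, API key and model name
# documented for the Text Classification API.
ENDPOINT = "https://api.example.com/text-classification"
API_KEY = "YOUR_API_KEY"
MODEL = "my_model"

def classify(text):
    """Send a test document to the classification endpoint and return the parsed response."""
    response = requests.post(ENDPOINT, data={"key": API_KEY, "model": MODEL, "txt": text})
    response.raise_for_status()
    return response.json()

print(classify("Text of a test document that was not used to define the model."))
```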
In the results, we differentiate two types of errors:
- False Positives: a category that appears in the classification should not appear; that is, one of the categories returned is incorrect.
- False Negatives: a category that should appear in the classification doesn't; in other words, a category is missing.
The following sections, resolve false negatives and resolve false positives, will explain the different ways to deal with these errors.
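When you evaluate a test document, it helps to compare the categories returned by the model against the categories you expect for it. The Python sketch below shows the idea with made-up category codes: categories returned but not expected are false positives, and expected categories that are missing are false negatives.

```python
def compare_categories(expected, returned):
    """Split the differences between the expected and returned categories into FP and FN."""
    expected, returned = set(expected), set(returned)
    false_positives = returned - expected   # returned, but should not appear
    false_negatives = expected - returned   # should appear, but is missing
    return false_positives, false_negatives

# Example with made-up category codes
fp, fn = compare_categories(expected={"01", "02"}, returned={"01", "03"})
print("False positives:", fp)  # {'03'}
print("False negatives:", fn)  # {'02'}
```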
These are some general guidelines to take into account in the optimization process:
- The terms that appear associated with the classification are all the terms that affect it: mandatory, relevant and irrelevant terms (but not excluding terms, since with those the category does not appear in the results at all), as well as the terms that come from the statistical classification.
- The classification obtained applies the relevance thresholds defined in the model's settings. This means that, to find out whether a category is missing from the results because of its relevance value or because the text simply isn't classified in it, you have to set the relevance thresholds to zero and test the classification again (see the sketch after this list).
- The test console has a debug parameter that lets you check the rules triggered in a classification. This, together with the possibility of deactivating a rule using "###", is extremely useful when it comes to testing/debugging the behavior of a model in a specific case.
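To illustrate the point about relevance thresholds, the following Python sketch (with made-up category codes and relevance values) shows how a threshold filters the raw classification: a category can be assigned to the text and still be hidden from the results because its relevance falls below the threshold, which is why retesting with the thresholds set to zero reveals it.

```python
# Made-up raw classification: category code -> relevance value
raw_classification = {"SPORTS": 100, "ECONOMY": 35, "POLITICS": 10}

def apply_threshold(classification, threshold):
    """Keep only the categories whose relevance reaches the threshold."""
    return {cat: rel for cat, rel in classification.items() if rel >= threshold}

print(apply_threshold(raw_classification, 40))  # {'SPORTS': 100}: ECONOMY and POLITICS are hidden
print(apply_threshold(raw_classification, 0))   # all three categories appear
```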
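Similarly, a debugging round trip with the debug parameter could look like the sketch below. As in the earlier sketch, the endpoint, API key, model name and the name/format of the debug parameter are assumed placeholders, not the documented values; the idea is simply to print the whole response so the triggered rules can be inspected, and then to deactivate a suspect rule with "###" in the model definition and test again.

```python
import json
import requests

# Hypothetical values, as in the earlier sketch: replace with the real endpoint,
# API key, model name and debug parameter documented for the Text Classification API.
ENDPOINT = "https://api.example.com/text-classification"
API_KEY = "YOUR_API_KEY"
MODEL = "my_model"

def classify_with_debug(text):
    """Classify a document with debug output enabled and return the raw response."""
    response = requests.post(
        ENDPOINT,
        data={"key": API_KEY, "model": MODEL, "txt": text, "debug": "y"},  # "y" is an assumed value
    )
    response.raise_for_status()
    return response.json()

# Print the whole response so the triggered rules can be inspected,
# whatever field the API returns them in.
print(json.dumps(classify_with_debug("Problematic test document."), indent=2))
```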