False positives can be resolved through the rules, through the training texts, or by modifying the list of stopwords; there are three possible actions we can carry out:
The first action is to edit the rules: if a false positive is caused by a rule that matches too broadly, the rule can be restricted (for example, by adding exclusion terms) so that the offending text no longer triggers the category. This option can be applied to hybrid models and to rule-based ones.
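As an illustration, here is a minimal sketch of restricting an over-broad rule. It assumes a toy keyword-rule representation in Python; the `Rule` class, its fields, and the example terms are all hypothetical and do not correspond to any particular product's rule language:

```python
# A toy keyword rule: a text matches the category if it contains at least
# one required term and none of the exclusion terms.
from dataclasses import dataclass, field

@dataclass
class Rule:
    category: str
    required_terms: list                                  # must contain one of these
    excluded_terms: list = field(default_factory=list)    # must contain none of these

    def matches(self, text: str) -> bool:
        lowered = text.lower()
        if not any(term in lowered for term in self.required_terms):
            return False
        # Exclusion terms restrict the rule so known false positives stop matching.
        return not any(term in lowered for term in self.excluded_terms)

# Original rule: too broad, fires on coffee-related texts for a programming category.
rule = Rule(category="programming", required_terms=["java"])
print(rule.matches("best java coffee beans"))   # True  -> false positive

# Corrected rule: add exclusion terms for the coffee-related sense of the word.
rule.excluded_terms = ["coffee", "beans", "espresso"]
print(rule.matches("best java coffee beans"))   # False -> false positive resolved
print(rule.matches("java generics tutorial"))   # True  -> valid texts still match
```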
The second action is to use the training texts: to correct a false positive this way, remove from the category the training texts most similar to the one that produces the false positive. This solution is not used very often, as it is more complicated than editing the rules. This option can be applied to statistical models and to hybrid ones.
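The following is a minimal sketch of this idea, assuming a statistical model built with scikit-learn; the categories, the training texts, and the 0.3 similarity threshold are hypothetical and only serve to illustrate the workflow:

```python
# Find the training texts in the offending category that are most similar
# to the false-positive text, drop them, and retrain the model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.naive_bayes import MultinomialNB

# Hypothetical training data: (text, category) pairs.
training = [
    ("quarterly earnings beat expectations", "finance"),
    ("central bank raises interest rates", "finance"),
    ("the striker scored in extra time", "sports"),
    ("club finances under investigation", "sports"),   # noisy example
]
false_positive = "company finances improve this quarter"  # wrongly tagged "sports"

texts = [t for t, _ in training]
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(texts + [false_positive])

# Similarity of every training text to the false-positive text.
sims = cosine_similarity(matrix[:-1], matrix[-1]).ravel()

# Drop "sports" texts too similar to the false positive (threshold is illustrative).
cleaned = [(t, c) for (t, c), s in zip(training, sims)
           if not (c == "sports" and s > 0.3)]

# Retrain the model on the cleaned training set.
X = vectorizer.fit_transform([t for t, _ in cleaned])
model = MultinomialNB().fit(X, [c for _, c in cleaned])
```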
The third action is to modify the list of stopwords: if a term is irrelevant for the whole model and adds noise to some categories, a good option is to add it to the list of stopwords so that it is not taken into account in the classification. This solution is not used very often either, but it works well in the initial phase of a model's optimization, since that is when it is easiest to identify the terminology used in the domain, and therefore the terms that are common within the domain but do not help in the classification.
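For example, with a scikit-learn vectorizer the built-in English stopword list can be extended with domain-specific terms; the medical terms below are hypothetical examples of words that appear in almost every text of a domain and carry no discriminating power between its categories:

```python
# Extend the built-in stopword list with domain-common terms so they are
# ignored when building features for classification.
from sklearn.feature_extraction.text import TfidfVectorizer, ENGLISH_STOP_WORDS

domain_stopwords = {"patient", "hospital", "doctor"}

vectorizer = TfidfVectorizer(
    stop_words=list(ENGLISH_STOP_WORDS | domain_stopwords)
)

X = vectorizer.fit_transform([
    "patient shows symptoms of flu",
    "hospital billing dispute with insurer",
])
# The domain-common terms no longer appear among the features.
print(sorted(vectorizer.get_feature_names_out()))
```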
It's important to remember that modifying any category may change the relevance values the model assigns to the rest of the categories.