2023:Confidence Multiplier and Output Confidence (Property)

From Grooper Wiki

This article is about an older version of Grooper. Information may be out of date and UI elements may have changed.

Graphic depicting the notion of Weighted Rules.

Some results carry more weight than others. The Confidence Multiplier and Output Confidence properties allow you to manually adjust an extraction result's confidence.

You may download the ZIPs below and upload them into your own Grooper environment (version 2023). The first contains a Project with resources used in examples throughout this article. The second contains one or more Batches of sample documents.

About

The Confidence Multiplier and Output Confidence properties of Data Type and Data Format extractors allow you to manually alter the confidence score of returned values.

Use of these properties is sometimes referred to as weighted rules. In practice, they let a user increase or decrease the confidence score of an extractor's result (or set its confidence to an assigned value), making that result appear more (or less) favorable. When used in combination with the Order By property set to Confidence on a parent Data Type, this lets you control which child extractor's result the parent prioritizes.
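To make that interaction concrete, here is a minimal Python sketch. It is not Grooper's API; the ExtractorResult record and its field names are assumptions used purely to illustrate how a Confidence Multiplier on one child extractor changes which result wins when the parent orders by confidence.

    # Illustration only (not Grooper code): two child extractor results under a
    # parent Data Type whose Order By property is set to Confidence.
    from dataclasses import dataclass

    @dataclass
    class ExtractorResult:
        value: str
        confidence: float        # raw confidence, 0.0 - 1.0
        multiplier: float = 1.0  # Confidence Multiplier set on the child extractor

        @property
        def adjusted_confidence(self) -> float:
            # The multiplier simply scales the raw confidence.
            return self.confidence * self.multiplier

    results = [
        ExtractorResult("12345", confidence=1.00, multiplier=0.5),  # loose pattern, down-weighted
        ExtractorResult("INV-12345", confidence=0.90),              # stricter pattern, multiplier left at 1
    ]

    # Ordering by adjusted confidence, the parent now prioritizes the stricter match.
    best = max(results, key=lambda r: r.adjusted_confidence)
    print(best.value, best.adjusted_confidence)  # INV-12345 0.9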

General Usage - Confidence Multiplier

Modifying the Confidence Multiplier property of a Data Type or Data Format is done by clicking the ellipsis button next to the Result Options property, which opens the Result Options submenu.

The Confidence Multiplier property defaults to 1 and can be changed in this submenu. The field is a double and takes floating point values.

For example, a value of 0.5 will multiply the confidence of output results by 0.5. If the output confidence was 100%, it will now be 50%. Similarly, you can increase the confidence, even above 100%. If the Confidence Multiplier property is set to 3 and an output result had a 50% confidence, it will now display as 150% confidence.
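In arithmetic terms, the multiplier is a straight scaling of the confidence score. The sketch below is a hypothetical helper function, not Grooper code, reproducing the two cases above.

    # Illustration only: scale a confidence score by the Confidence Multiplier.
    def apply_multiplier(confidence: float, multiplier: float = 1.0) -> float:
        # Values above 1.0 (100%) are allowed.
        return confidence * multiplier

    print(apply_multiplier(1.0, 0.5))  # 100% * 0.5 -> 0.5 (50%)
    print(apply_multiplier(0.5, 3.0))  # 50%  * 3   -> 1.5 (150%)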

General Usage - Output Confidence

Modifying the Output Confidence property of a Data Type or Data Format is also done by clicking the ellipsis button next to the Result Options property, which opens the Result Options submenu.

The Output Confidence property defaults to 0% and can be changed in this submenu. The default of 0% will not alter a result's confidence score. Changing this number will override whatever the result's original confidence is and replace it with this value.

For example, a value of 75% will change the confidence of output results to 75%. If the output confidence was 100%, it will now be 75%. If it was 50%, it will now be 75%. If it was 75%, it will now be (you guessed it) 75%. It doesn't matter what the original confidence was; it will be replaced with the Output Confidence value.
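A short sketch of the same behavior, again with a hypothetical helper rather than Grooper code: a value of 0% leaves the confidence untouched, while any other value replaces it outright.

    # Illustration only: Output Confidence overrides the original confidence
    # unless it is left at its default of 0%.
    def apply_output_confidence(confidence: float, output_confidence: float = 0.0) -> float:
        return confidence if output_confidence == 0.0 else output_confidence

    for original in (1.0, 0.5, 0.75):
        print(apply_output_confidence(original, 0.75))  # always 0.75 (75%)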

Waterfall Classification

When the Classification Method property on a Content Model is set to Lexical or Rules-Based, you can set up Positive Extractors on Document Types. If a Document Type's Positive Extractor returns a result above the Minimum Similarity set on the Content Model, the document will be assigned that Document Type during classification. By default, a result from an extractor is returned at 100% confidence (unless Fuzzy RegEx is leveraged to return the result, in which case the confidence will be affected by the fuzzy matching algorithm). Given this, Positive Extractors are almost certain to come in above the Minimum Similarity.

This extractor could be a "Waterfall Extractor", taking advantage of the Waterfall Extraction technique. However, for classification, the system is just looking for some result to be returned above the Minimum Similarity confidence threshold.

In the Waterfall Classification method, the Minimum Confidence property can be set in the Result Filter property window of a Data Type, which eliminates any results below that confidence. This may eliminate the results of some referenced extractors that technically matched, but at a low percentage.

If we happen to know that those lower-confidence hits are valid and should count toward classifying the document, the Confidence Multiplier on those referenced Data Types can be set to a higher value so that they clear the required Minimum Confidence.

Similarly, if higher-confidence hits are inappropriately classifying documents and shouldn't be returned, the Confidence Multiplier property can be reduced so that those Data Types only exceed the Minimum Confidence when their raw confidence is very high.
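The effect of pairing a Confidence Multiplier with a Minimum Confidence filter can be sketched as follows. The function name and the 80% threshold are assumptions for illustration, not Grooper's API.

    # Illustration only: does a result survive the Minimum Confidence filter
    # once its Confidence Multiplier has been applied?
    MIN_CONFIDENCE = 0.80  # assumed Minimum Confidence set in the Result Filter

    def passes_filter(confidence: float, multiplier: float = 1.0) -> bool:
        return confidence * multiplier >= MIN_CONFIDENCE

    print(passes_filter(0.60))                  # False: a valid match is filtered out
    print(passes_filter(0.60, multiplier=1.5))  # True: boosted past the threshold, now kept
    print(passes_filter(0.85, multiplier=0.5))  # False: demoted, only very high raw confidence passes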

Example

In the example below, we are going to use the Project and Batch that accompany the Document Classification 2023 course on Grooper University.

  1. Right now we're looking at how documents are being classified, working in a Classify Batch Process Step.
  2. We see that this Title Opinion is being misclassified as a Generic Letter.
  3. Notice that the document has a similarity score of 100% for the Generic Letter Document Type and a 68% score for the Title Opinion Document Type.
  1. If we go to the Content Model...
  2. We can see that our Minimum Similarity property is set to 55%.
    • Both the Generic Letter and the Title Opinion Document Types came in above the Minimum Similarity percentage, but the Generic Letter won out at a higher percentage.
  1. Let's look at the Generic Letter Document Type.
  2. The Positive Extractor is set to a reference.
  1. Let's look at the extractor that is being referenced.
  2. We're going to scroll down to the "OUTPUT" section in the Data Type's "Properties" tab, and click the ellipsis button next to Result Options.
  1. When the "Result Options" window pops up, we see that by default the Confidence Override is set to 0%.
  2. If we set this property to anything other than 0%, when a document is classified, whatever Document Type is using this extractor will return a similarity score equal to that number.
  1. We're going to set the Confidence Override to 60%.
  2. Click "OK" to apply the new settings.
  1. With our settings updated, let's go back to the Classify Batch Process Step.
  2. On the "Classification Tester" tab we have reclassified the documents.
  3. Notice that the Title Opinion document is now being classified appropriately.
  4. The Title Opinion Document Type is still coming in at 68%. However, the Generic Letter Document Type is returning with a 60% similarity score due to the Confidence Override property we set.
  1. This Generic Letter is still being classified as a Generic Letter Document Type.
  2. We see that although the Generic Letter Document Type has a 60% similarity score, it is still higher than the Minimum Similarity score of 55%, and it is also higher than any other Document Type's score (see the sketch below).
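The classification decision in this walkthrough can be summarized with a small sketch using the numbers above. The classify helper is hypothetical, but the 55% Minimum Similarity and the 100% / 68% / 60% scores come straight from the example.

    # Illustration only: the winning Document Type is the highest similarity score
    # that meets or exceeds the Content Model's Minimum Similarity.
    MIN_SIMILARITY = 0.55

    def classify(similarities):
        eligible = {t: s for t, s in similarities.items() if s >= MIN_SIMILARITY}
        return max(eligible, key=eligible.get) if eligible else None

    # Before the override: the Generic Letter extractor returns at 100% and wins.
    print(classify({"Generic Letter": 1.00, "Title Opinion": 0.68}))  # Generic Letter

    # After setting the override to 60%: Title Opinion's 68% now wins.
    print(classify({"Generic Letter": 0.60, "Title Opinion": 0.68}))  # Title Opinion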