2.90:Output Extractor Key

From Grooper Wiki

This article is about an older version of Grooper.

Information may be out of date and UI elements may have changed.


Also known as "feature tagging" or "data tagging", this is another weapon in the arsenal of powerful Grooper classification techniques.

A Content Model and accompanying Batch for what will be built can be found here. Downloading it is not required to understand this article, but it can be helpful for following along. This file was exported from, and is meant for use in, Grooper 2.9.

About

Output Extractor Key is a property of the Data Type extractor. It is exposed when the Collation property is set to Individual. When Output Extractor Key is set to True, each output value is replaced with a key representing the name of the extractor that produced the match. This is useful when extracting non-word classification features.

The main purpose of this property is to supplement the capabilities of Grooper's classification technology. When using Lexical classification, a Content Model must use an extractor to collect lexical features during training. A common use case is to have the extractor collect words, which works well when the semantic content of a document varies among examples and is indicative of their type. However, this breaks down when a document consists mainly of repeated types of information. Take, for example, a bank statement. With no keywords present on the document, the only way to properly classify it is to recognize that it contains a high frequency of transaction line items. It would be highly impractical to train Grooper to understand every variation of a transaction line item.

This is where the Output Extractor Key property comes into play. Using this property, one can establish an extractor that pattern matches the various transaction line item formats on the document and returns a single output for each result, such as "feature_transaction", instead of the myriad varied results from the pattern match. This is then fed to the classification engine. With this approach, a document containing a high frequency of "transaction" features, say 50 of them, will be treated as though it contained 50 separate occurrences of the phrase "feature_transaction".
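To make the behavior concrete, the following is a minimal sketch in plain Python, not Grooper's API or internals. The extractor names and patterns are assumptions for illustration; the point is only that, with the key enabled, every hit is reported as the same feature_{name} token instead of its raw matched text.

import re

# Hypothetical child extractors, keyed by the name of the Data Type that
# would produce them (patterns are assumptions for illustration only).
extractors = {
    "transaction": re.compile(r"\d{2}/\d{2}\s+[A-Z ]+\$\d+\.\d{2}"),
    "section": re.compile(r"Section \d+:.*"),
}

def extract_features(text, output_extractor_key=True):
    """Return the classification features found in a document's text."""
    features = []
    for name, pattern in extractors.items():
        for match in pattern.finditer(text):
            if output_extractor_key:
                # Output Extractor Key = True: every hit collapses to a single
                # key naming the extractor that produced it.
                features.append(f"feature_{name}")
            else:
                # Otherwise the raw, highly varied matched text is returned.
                features.append(match.group())
    return features

statement = "01/02 GROCERY STORE $45.10\n01/03 GAS STATION $30.00\n"
print(extract_features(statement))
# ['feature_transaction', 'feature_transaction']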


How To

Following is an example of how to configure a Data Type to use the Output Extractor Key property, then configure the Content Model to leverage it for the purposes of classification. This example uses a few different document formats, all of which are Mineral Ownership Reports. In spite of their different formats, because they share similar content, the aforementioned extractor makes their classification quite simple.


Understanding the Content Model

The purpose of this Content Model is to classify documents as the one Document Type it contains. Its Classification Method property is set to Lexical, and it is referencing...

  1. ...this Data Type, which is configured to find words.
    • This is a common configuration for classifying documents. In many cases it works very well, but for these types of documents it does a poor job.
  2. These three Data Types are configured to find a selected set of data that is highly prominent on these documents, and will be used to supplement the main lexical extractor.

Configuring the 'Feature Tagging' Extractor

  1. Add, as a child object to the CLAS - Lexical Classifier Data Type, another Data Type and name it CLAS - Lexical Classifier - Output Extractor Key.
  2. Configure the Referenced Extractors property to point at the 3 Data Types mentioned previously: Address, Section, and TownshipRange.
  3. Save and test, and notice the Results list view. Many results are being returned, and you can see which extractor each result is coming from.
    • It's important to note how these results are being returned at the moment. While the Data Type is getting the information we want, we can't use these results for training purposes because they're so varied. In their current state they're worthless, but we can transform the information to work for us!
  4. Expand the Collation property, and set the Output Extractor Key property to True. Save and test again.
  5. Notice how the results are now different. This is the really critical part. Instead of all the varied results returned from any one extractor, they're now unified and returned as one result: feature_{name of data type}.
    • The reason this is so important is the effect it has on the weightings created during training. All the results from, say, the Section Data Type will now be returned as feature_section, instead of all the varied results they were before, like Section 19: SE/4, except for two acres described in Book 883, Page 229 and Section 9: N2, N2S2, S2SW, SWSE, etc. This will cause a high frequency of that specific feature and, as a result, will give it great significance in the weightings (see the sketch after this list).
  6. Select the parent Data Type named CLAS - Lexical Classifier.
    • The results from the parent are now the combination of its own results and its children's.
  7. The Deduplicate By property being set to Area lets the longer results from the child extractors supersede the individual short words.
  8. This is a good example of a real-world setup and how you'd want to leave it moving forward, but for the sake of simplicity, and to illustrate the training results more clearly, go ahead and clear the Referenced Extractors property.
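To see why step 5 matters for training, here is a minimal counting sketch in plain Python; it is a simplified stand-in for the real weighting calculation, not Grooper's training code. The two raw Section values are the samples quoted above.

from collections import Counter

# Raw results from the Section Data Type (samples quoted above).
raw_results = [
    "Section 19: SE/4, except for two acres described in Book 883, Page 229",
    "Section 9: N2, N2S2, S2SW, SWSE",
]

# Without Output Extractor Key, every value is unique, so each feature has a
# frequency of 1 and no single feature carries much weight.
print(Counter(raw_results))

# With Output Extractor Key, every hit becomes the same token, so its
# frequency (and therefore its influence on the weightings) grows with
# each occurrence on the document.
tagged = ["feature_section" for _ in raw_results]
print(Counter(tagged))  # Counter({'feature_section': 2})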


Training and Classifying the Batch

  1. Select the Content Model.
  2. Click the Classification Testing tab.
  3. Select a document and click the Train Document button.
  4. In the Train Document Type window, select the Output Extractor Key Document Type and click the OK button.
    • This will close the Train Document Type window, and a process execution window will display and go away as soon as the command is processed.
  5. Notice that the document now has a new document icon and is named after the Document Type from the Content Model (i.e., it is classified).
    • Only this one document will be classified; the others will still display with folder icons and won't be classified.
  6. Click the Classify Batch button.
  7. In the Classification Tester Settings window, be sure the Folder Level and Classification Level properties are properly set, and click the Execute button.
    • This will close the Classification Tester Settings window, and a process execution window will display and go away as soon as the command is processed.
  8. With one trained example, notice that all the documents are now classified.
    • The features that the CLAS - Lexical Classifier (and its children) collected are highly prominent on these documents, therefore they have a high feature count and easily classify this type of document.
  9. Now, select the Output Extractor Key Document Type.
  10. Click the Weightings tab.
  11. Notice the features collected. Just these three, but they have a very high count and are weighted extremely high as a result. A rough sketch of how such weightings can drive classification follows below.
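As a rough mental model of what those weightings do during classification, assuming nothing about Grooper's actual scoring math, here is a toy Python sketch in which a Document Type's trained feature weights are multiplied by each feature's count on a new document. The weight values and lowercase key spellings are invented for illustration.

from collections import Counter

# Hypothetical weightings after training one example document.
trained_weightings = {
    "feature_address": 0.7,
    "feature_section": 0.9,
    "feature_townshiprange": 0.8,
}

def score(document_features, weightings):
    """Sum each feature's trained weight times its count on the document."""
    counts = Counter(document_features)
    return sum(weightings.get(feature, 0.0) * n for feature, n in counts.items())

# A Mineral Ownership Report packed with tagged features scores very high,
# while a document containing none of these features scores zero.
report = ["feature_section"] * 40 + ["feature_address"] * 3 + ["feature_townshiprange"] * 5
print(round(score(report, trained_weightings), 2))             # 42.1
print(score(["some", "other", "words"], trained_weightings))   # 0.0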


If you would like a completed version of the content linked above and walked through in this article, to compare against your own, you can download it here. This file was exported from, and is meant for use in, Grooper 2.9.