Output Extractor Key (Property)

From Grooper Wiki
<blockquote style="font-size:14pt">Also known as "data tagging", this is another weapon in the arsenal of powerful '''Grooper''' classification techniques.</blockquote>
{{Migrated}}
 
{{2023:{{PAGENAME}}}}
A '''[[Content Model]]''' and accompanying '''[[Batch]]''' for what will be built can be found '''[[Media:Output_Extractor_Key.zip|here]]'''. Downloading it is not required to understand this article, but it can be helpful for following along with the content. ''This file was exported from, and is meant for use in, Grooper 2.9.''
 
==About==
'''''Output Extractor Key''''' is a property of the '''[[Data Type]]''' extractor. It is exposed when the '''''[[Collation]]''''' property is set to ''Individual''. When the '''''Output Extractor Key''''' property is set to ''True'', each output value is set to a key representing the name of the extractor which produced the match. This is useful when extracting non-word classification features.
<br/><br/>
The main purpose of this property is to supplement the capabilities of '''Grooper's''' classification technology. When using ''Lexical'' classification, a '''Content Model''' must use an extractor to collect the lexical features upon training. A common use case is to have the extractor collect words, which is beneficial when the semantic content of a document is varied among examples, and indicative of their type. However, this breaks down when a document consists mainly of repeated types of information. Take, for example, a bank statement. With no keywords present on the document, the only way to properly classify the document is to recognize that it contains a high frequency of transaction line items. It would be highly impractical to train '''Grooper''' to understand every variation of a transaction line item.
<br/><br/>
This is where the '''''Output Extractor Key''''' property comes into play. Using this property, one can establish an extractor that pattern matches the various transaction line item formats on the document and returns a ''single'' output for each result, such as "feature_transaction", instead of the myriad results returned from the pattern match. This output is then fed to the classification engine. With this approach, a document containing a high frequency of "transaction" features (say, 50 of them) will be treated as though it contained 50 separate occurrences of the phrase "feature_transaction".
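The bank statement example can be sketched outside of '''Grooper''' in a few lines of Python. The patterns and sample statement lines below are hypothetical illustrations of the idea, not Grooper's implementation: however varied the individual matches are, each one is emitted as the single key "feature_transaction".

```python
import re
from collections import Counter

# Hypothetical patterns matching a few transaction line-item formats.
TRANSACTION_PATTERNS = [
    re.compile(r"\d{2}/\d{2}\s+DEBIT\s+.+\s+\$\d+\.\d{2}"),
    re.compile(r"\d{2}/\d{2}\s+DEPOSIT\s+.+\s+\$\d+\.\d{2}"),
]

def extract_features(text):
    """Return classification features: every transaction line, regardless of
    its actual text, is emitted as the single key 'feature_transaction'."""
    features = []
    for line in text.splitlines():
        if any(p.search(line) for p in TRANSACTION_PATTERNS):
            features.append("feature_transaction")
    return Counter(features)

statement = """04/01 DEBIT   GROCERY STORE      $54.10
04/03 DEPOSIT PAYROLL            $1250.00
04/07 DEBIT   GAS STATION        $38.75"""

print(extract_features(statement))
```

Three completely different line items, one repeated feature, which is exactly what the classification engine needs to count.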
 
==How To==
{|
| style="padding:25px; vertical-align:top" |
Following is an example of how to configure a '''Data Type''' to use the '''''Output Extractor Key''''' property, then configure the '''Content Model''' to leverage it for the purposes of classification. This example contains a few different document formats, but all are Mineral Ownership Reports. Despite their different formats, because they share similar content, the use of the aforementioned extractor makes their classification quite simple.
|| [[File:output_extractor_key_001.gif]]
|}
 
{|cellpadding="10" cellspacing="5"
|-style="background-color:#f89420; color:white"
|style="font-size:14pt"|'''!'''||Some of the tabs in this tutorial are longer than others.  Please scroll to the bottom of each step's tab before moving on to the next step.
|}
 
<tabs style="margin:20px">
<tab name="Understanding the Content Model" style="margin:25px">
====Understanding the Content Model====
{|
| style="padding:25px; vertical-align:top" |
The purpose of this '''Content Model''' is to classify the one '''[[Document Type]]''' it contains. Its '''Classification Method''' property is set to '''''Lexical''''' and it is referencing...
# ...this '''Data Type''', which is configured to find words.
#* This is often a configuration used to attempt to classify documents. In many cases, this works very well, but for these types of documents it will do a poor job.
# ...these three '''Data Types''', which are configured to find a selected set of data that is highly prominent on these documents and will be used to supplement the main lexical extractor.
|| [[File:output_extractor_key_002.png|1000px]]
|}
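To see why word features alone do a poor job on documents like these, compare the unigrams of two sample ''Section'' values taken from this tutorial's documents: two reports of the same type share almost no distinctive words. (Illustrative Python, not Grooper internals.)

```python
import re

# Two sample 'Section' values from the Mineral Ownership Reports.
doc_a = "Section 19: SE/4, except for two acres described in Book 883, Page 229"
doc_b = "Section 9: N2, N2S2, S2SW, SWSE"

def unigrams(text):
    """Collect a bag of lowercase word features, as a word extractor would."""
    return set(re.findall(r"[A-Za-z0-9/]+", text.lower()))

# The only word the two excerpts have in common is the generic 'section'.
shared = unigrams(doc_a) & unigrams(doc_b)
print(shared)
```

With so little word overlap between examples of the same '''Document Type''', word-based weightings never accumulate enough shared features, which is why the supplemental extractors in step 2 are needed.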
</tab>
<tab name="Configuring the 'Data Tagging' Extractor" style="margin:25px">
====Configuring the 'Data Tagging' Extractor====
{| class="wikitable"
| style="padding:25px; vertical-align:center" |
# Add, as a child object to the '''CLAS - Lexical Classifier''' '''Data Type''', another '''Data Type''' and name it '''CLAS - Lexical Classifier - Output Extractor Key'''.
# Configure the '''''Referenced Extractors''''' property to point at the 3 '''Data Types''' mentioned previously: '''Address''', '''Section''', and '''TownshipRange'''.
# Save and test, and notice the '''Results''' list view. Many results are being returned, and you can see which extractor each one came from.
#* It's important to note how these results are being returned at the moment. While the '''Data Type''' is getting the information we want, we can't use these results for training purposes because they're so varied. In their current state, they're worthless, but we can transform the information to work for us!
|| [[File:output_extractor_key_003.png|1000px]]
|-
| style="padding:25px; vertical-align:center" |
# <li value="4"> Expand the '''''Collation''''' property, and set the '''''Output Extractor Key''''' property to ''True''. Save and test again.</li>
# Notice how the results are now different. This is the really critical part. Instead of all the varied results returned from any one extractor, they're now unified and returned as one result: ''<span style="color:#0001fd">feature_<code><span style="color:#ff00ff">{name of data type}</span></code></span>''.
#* The reason this is so important is the effect it has on the weightings created during training. All the results from, say, the '''Section''' '''Data Type''' will now be returned as ''<span style="color:#0001fd">feature_section</span>'', instead of all the varied results they were before, like ''<span style="color:#0001fd">Section 19: SE/4, except for two acres described in Book 883, Page 229 and</span>'' and ''<span style="color:#0001fd">Section 9: N2, N2S2, S2SW, SWSE</span>'', etc. This produces a high frequency of that specific feature and, as a result, gives it great significance in the weightings.
|| [[File:output_extractor_key_003b.png|1000px]]
|-
| style="padding:25px; vertical-align:center" |
# <li value="6"> Select the parent '''Data Type''' named '''CLAS - Lexical Classifier'''.</li>
#* The results from the parent are now the combination of its own matches and those of its children.
# The '''''Deduplicate By''''' property being set to ''Area'' lets the longer results from the child extractor supersede the individual short words.
# This is a good example of a real-world setup and how you'd want to leave it moving forward, but for sake of simplicity and to illustrate the training results more clearly, go ahead and clear the '''''Referenced Extractors''''' property.
|| [[File:output_extractor_key_003c.png|1000px]]
|}
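Conceptually, steps 4 through 7 boil down to two transformations: renaming each child extractor's hits to ''feature_{name of data type}'', and letting larger results win during area-based deduplication. Below is a minimal Python sketch of that idea, with hypothetical character spans standing in for extractor results; it is an approximation of the behavior, not Grooper's actual deduplication code.

```python
# Each child extractor returns (start, end) character spans it matched.
# Hypothetical spans; in Grooper these come from the referenced Data Types.
child_hits = {
    "section":       [(0, 40)],
    "townshiprange": [(45, 70)],
}
word_hits = [(0, 7), (8, 10), (12, 16), (45, 53), (80, 85)]  # unigram matches

def overlaps(a, b):
    """True when two (start, end) spans share at least one character."""
    return a[0] < b[1] and b[0] < a[1]

# Output Extractor Key = True: every hit becomes 'feature_{extractor name}'.
feature_results = [("feature_" + name, span)
                   for name, spans in child_hits.items() for span in spans]

# Deduplicate By = Area: drop word hits covered by a longer feature span.
surviving_words = [span for span in word_hits
                   if not any(overlaps(span, fspan) for _, fspan in feature_results)]

results = [key for key, _ in feature_results] + ["word"] * len(surviving_words)
print(results)
```

Only the one word outside any feature span survives; every word inside a feature's area is superseded by the unified feature key.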
</tab>
<tab name="Training and Classifying the Batch" style="margin:25px">
====Training and Classifying the Batch====
{| class="wikitable"
| style="padding:25px; vertical-align:center" |
# Select the '''Content Model'''.
# Click the '''Classification Testing''' tab.
# Select a document and click the '''Train Document''' button.
|| [[File:output_extractor_key_004a.png|1000px]]
|-
| style="padding:25px; vertical-align:center" |
# <li value="4"> In the '''Train Document Type''' window, select the '''Output Extractor Key''' '''Document Type''' and click the ''OK'' button.</li>
#* This will close the '''Train Document Type''' window; a process execution window will appear and disappear as soon as the command completes.
|| [[File:output_extractor_key_004b.png|1000px]]
|-
| style="padding:25px; vertical-align:center" |
# <li value="5"> Notice that the document now has a ''document'' icon and is named after the '''Document Type''' from the '''Content Model''' (i.e., it is classified).</li>
#* This one document will be classified; the others will still display with folder icons and remain unclassified.
# Click the '''Classify Batch''' button.
# In the '''Classification Tester Settings''' window, be sure the '''''Folder Level''''' and '''''Classification Level''''' properties are properly set, and click '''Execute'''.
#* This will close the '''Classification Tester Settings''' window; a process execution window will appear and disappear as soon as the command completes.
|| [[File:output_extractor_key_004c.png|1000px]]
|-
| style="padding:25px; vertical-align:center" |
# <li value="8"> With just one trained example, notice that all the documents are now classified.</li>
#* The features that the '''CLAS - Lexical Classifier''' (and its children) collected are highly prominent on these documents; therefore they have a high feature count and easily classify this type of document.
|| [[File:output_extractor_key_004d.png|1000px]]
|-
| style="padding:25px; vertical-align:center" |
# <li value="9"> Now, select the '''Output Extractor Key''' '''Document Type'''.</li>
# Click the '''Weightings''' tab.
# Notice the features collected. There are just these three, but they have a very high count and are weighted extremely high as a result.
|| [[File:output_extractor_key_004e.png|1000px]]
|}
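A toy calculation shows why those high feature counts translate into high weightings. This is a simple frequency-based approximation with hypothetical counts, not Grooper's actual weighting algorithm:

```python
from collections import Counter

# Hypothetical features collected from one trained Mineral Ownership Report.
trained_features = (["feature_section"] * 23
                    + ["feature_townshiprange"] * 19
                    + ["feature_address"] * 4)

counts = Counter(trained_features)
total = sum(counts.values())

# A simple frequency-based weight: features that occur often on the trained
# example dominate the score for candidate documents.
weights = {feat: count / total for feat, count in counts.items()}

def score(document_features):
    """Score a candidate document by its weighted feature counts."""
    return sum(weights.get(f, 0.0) for f in document_features)

# A new document with many of the same features scores very high...
candidate = ["feature_section"] * 20 + ["feature_townshiprange"] * 15
print(round(score(candidate), 3))

# ...while unmatched word features contribute nothing.
print(score(["some", "random", "words"]))
```

Because a single trained example already yields dozens of occurrences of each feature key, even one training pass produces decisive weightings, which is why the whole batch classifies after training just one document.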
</tab>
</tabs>
<br/>
If you would like a completed version of the content linked above and walked through in this article, to compare against your own, you can download it [[Media:Output_Extractor_Key_complete.zip|here]].

Latest revision as of 11:27, 28 August 2024

