2023:Output Extractor Key (Property)


This article is about an older version of Grooper.

Information may be out of date and UI elements may have changed.


The Output Extractor Key property is another weapon in the arsenal of powerful Grooper classification techniques. It allows Data Types to return results normalized in a way more beneficial to document classification.

You may download and import the file below into your own Grooper environment (version 2023) to follow along with this tutorial. It contains a Batch with the example document(s) discussed here and a Project to configure.

About

Output Extractor Key is a property on the Data Type extractor. It is exposed when the Collation property is set to Individual. When Output Extractor Key is set to True, each output value is set to a key representing the name of the extractor that produced the match. It is useful when extracting non-word classification features.

The main purpose of this property is to supplement the capabilities of Grooper's classification technology. When using Lexical classification, a Content Model must use an extractor to collect the lexical features upon training. A common use case is to have the extractor collect words, which is beneficial when the semantic content of a document is varied among examples, and indicative of their type. However, this breaks down when a document consists mainly of repeated types of information. Take, for example, a bank statement. With no keywords present on the document, the only way to properly classify the document is to recognize that it contains a high frequency of transaction line items. It would be highly impractical to train Grooper to understand every variation of a transaction line item.

This is where the Output Extractor Key property comes into play. Using this property, one can establish an extractor that pattern matches the various transaction line item formats on the document and returns a single output for each result, such as "feature_transaction", instead of the myriad results returned by the pattern match. This is then fed to the classification engine. With this approach, a document containing a high frequency of "transaction" features, say 50 of them, will be treated as though it contained 50 separate occurrences of the phrase "feature_transaction".
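The snippet below is not Grooper code; it is only a minimal Python sketch of the idea, using hypothetical regular expressions in place of the transaction line item extractors:

  import re
  from collections import Counter

  # Hypothetical patterns standing in for the extractors that match
  # transaction line items on a bank statement.
  TRANSACTION_PATTERNS = [
      re.compile(r"\d{2}/\d{2}\s+.+\s+-?\$[\d,]+\.\d{2}"),   # "03/14  ACH PAYMENT  -$1,204.56"
      re.compile(r"CHECK\s+#\d+\s+\$[\d,]+\.\d{2}"),          # "CHECK #1042  $85.00"
  ]

  def classification_features(text, output_extractor_key):
      # With output_extractor_key=False, every raw match is its own feature,
      # so 50 different transaction lines yield 50 different features.
      # With output_extractor_key=True, every match collapses to one key,
      # so the same 50 lines yield "feature_transaction" with a count of 50.
      features = Counter()
      for line in text.splitlines():
          if any(p.search(line) for p in TRANSACTION_PATTERNS):
              key = "feature_transaction" if output_extractor_key else line.strip()
              features[key] += 1
      return features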


How To

Following is an example of how to configure a Data Type to use the Output Extractor Key property, then configure the Content Model to leverage it for classification. The example uses a few different document formats, all of which are Mineral Ownership Reports. Despite their different formats, their content is similar, so the extractor described above makes classifying them quite simple.

Some of the tabs in this tutorial are longer than others. Please scroll to the bottom of each step's tab before going to the next step.

Understanding the Content Model

The purpose of this Content Model is to classify documents as the one Document Type it contains. Its Classification Method property is set to Lexical and it is referencing...

  1. ...this Data Type, which is configured to find words.
    • This configuration is often used to attempt to classify documents. In many cases it works very well, but for these types of documents it will do a poor job.
  2. These three Data Types are configured to find a selected set of data that is highly prominent on these documents, and will be used to supplement the main lexical extractor.

Configuring the 'Feature Tagging' Extractor

  1. Add, as a child object to the CLAS - Unigrams Data Type, another Data Type and name it Features.
  2. Configure the Referenced Extractors property to point at the 3 Data Types mentioned previously: Address, Section, and TownshipRange.
  3. Save and test, and notice the Results list view. Many results are being returned, and you can see which extractor each is returned from.
    • It's important to note how these results are being returned at the moment. While the Data Type is getting the information we want, we can't use these results for training purposes because they're so varied. In their current state, they're worthless, but we can transform the information to work for us!

  1. Expand the Collation property, and set the Output Extractor Key property to True. Save and test again.
  2. Notice how the results are now different. This is the really critical part. Instead of all the varied results returned from any one extractor, they're now unified and returned as one result: feature_{name of data type}.
    • The thing to understand about why this is so important is the effect it has on the weightings created during training. All the results from, say, the Section Data Type will now be returned as feature_section, instead of all the varied results they were before, like Section 19: SE/4, except for two acres described in Book 883, Page 229 and Section 9: N2, N2S2, S2SW, SWSE, etc. This will cause a high frequency of that specific feature and, as a result, give it great significance in the weightings, as sketched below.
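A rough before/after illustration in plain Python (not Grooper), using the two sample Section values quoted above:

  from collections import Counter

  # Raw values the Section Data Type might return before the change.
  raw_results = [
      "Section 19: SE/4, except for two acres described in Book 883, Page 229",
      "Section 9: N2, N2S2, S2SW, SWSE",
  ]

  before = Counter(raw_results)                             # every value is unique, each counted once
  after = Counter("feature_section" for _ in raw_results)   # one shared key, counted twice

  print(before.most_common(1))   # [('Section 19: ...', 1)]  - no single feature stands out
  print(after.most_common(1))    # [('feature_section', 2)]  - one feature carrying all the weight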

  1. Select the parent Data Type named CLAS - Unigrams.
    • The results from the parent are now the combination of it and its children.
  2. The Deduplicate By property being set to Area lets the longer results from the child extractor supersede the individual short words, as sketched below.
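Grooper handles this internally, but the effect of area-based deduplication can be sketched roughly like this (the character-offset spans are hypothetical, and this is not Grooper's implementation):

  # Rough sketch of area-style deduplication: each result is (start, end, value).
  # A result is dropped when its span falls entirely inside a longer kept result,
  # so the child extractor's long "feature_" results supersede the short words.
  def deduplicate_by_area(results):
      kept = []
      for start, end, value in sorted(results, key=lambda r: r[1] - r[0], reverse=True):
          if not any(start >= ks and end <= ke for ks, ke, _ in kept):
              kept.append((start, end, value))
      return kept

  words = [(0, 7, "Section"), (8, 10, "19"), (12, 16, "SE/4")]
  features = [(0, 16, "feature_section")]
  print(deduplicate_by_area(words + features))   # only (0, 16, 'feature_section') survives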


Training and Classifying the Batch

  1. Select the Classify Step of the Batch Process.
  2. Make sure the correct Content Model is selected for the Content Model Scope.
  3. Click on the "Classification Tester" tab.

  1. Right click the document you want to train and then hover over the "Classification" option.
  2. Click "Train As...".
  1. When the "Train As" window pops up, use the drop down to select which Document Type you want to train the document as, and click "Execute".

  1. After training the document, select all of the documents, right click, and hover over the "Classification" option again.
  2. Click "Classify...".
  1. When the "Classify" window pops up, select the correct Content Model from the Content Type drop down box and click "Execute".

  1. All of the documents should now be appropriately classified after only training one document.
    • The features that the CLAS - Unigrams Data Type (and its children) collected are highly prominent on these documents, therefore they have a high feature count and easily classify this type of document.
  2. Click on the "MOR" Document Type.
  3. If you click on the weightings tab, you can see the features that were collected and how they are weighted when classifying.
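The numbers below are invented and the scoring is only a toy stand-in for Grooper's actual weighting math, but it shows why a high count of a few normalized features makes classification easy after a single trained example:

  from collections import Counter

  # Hypothetical feature counts from the one trained MOR example.
  trained_mor = Counter({"feature_address": 12, "feature_section": 38, "feature_townshiprange": 21})

  def similarity(document, training):
      # Toy frequency overlap: the more the document's feature counts resemble
      # the trained example's, the higher the score (1.0 = identical profile).
      shared = set(document) & set(training)
      return sum(min(document[f], training[f]) for f in shared) / sum(training.values())

  new_doc = Counter({"feature_address": 9, "feature_section": 41, "feature_townshiprange": 18})
  print(round(similarity(new_doc, trained_mor), 2))   # 0.92 - strong match to the MOR Document Type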


Glossary

Batch Process: Batch Process nodes are crucial components in Grooper's architecture. A Batch Process is the step-by-step processing instructions given to a Batch. Each step is comprised of a "Code Activity" or a Review activity. Code Activities are automated by Activity Processing services. Review activities are executed by human operators in the Grooper user interface.

  • Batch Processes by themselves do nothing. Instead, they execute Batch Process Steps which are added as child nodes.
  • A Batch Process is often referred to as simply a "process".

Batch: Batch nodes are fundamental in Grooper's architecture. They are containers of documents that are moved through workflow mechanisms called Batch Processes. Documents and their pages are represented in Batches by a hierarchy of Batch Folders and Batch Pages.

Classification Method: The Classification Method property of a Content Model determines the logic used to classify its documents, such as the Lexical method used in this tutorial.

Classification: Classification is the process of identifying and organizing documents into categorical types based on their content or layout. Classification is key for efficient document management and data extraction workflows. Grooper has different methods for classifying documents. These include methods that use machine learning and text pattern recognition. In a Grooper Batch Process, the Classify Activity will assign a Content Type to a Batch Folder.

Classify: Classify is an Activity that "classifies" Batch Folders in a Batch by assigning them a Document Type.

  • Classification is key to Grooper's document processing. It affects how data is extracted from a document (during the Extract activity) and how Behaviors are applied.
  • Classification logic is controlled by a Content Model's "Classify Method". These methods include using text patterns, previously trained document examples, and Label Sets to identify documents.

Content Model: Content Model nodes define a classification taxonomy for document sets in Grooper. This taxonomy is defined by the Content Categories and Document Types they contain. Content Models serve as the root of a Content Type hierarchy, which defines Data Element inheritance and Behavior inheritance. Content Models are crucial for organizing documents for data extraction and more.

Content Type: Content Types are a class of node types used to classify Batch Folders. They represent categories of documents (Content Models and Content Categories) or distinct types of documents (Document Types). Content Types serve an important role in defining Data Elements and Behaviors that apply to a document.

Data Type: Data Types are nodes used to extract text data from a document. Data Types have more capabilities than Value Readers. Data Types can collect results from multiple extractor sources, including a locally defined extractor, child extractor nodes, and referenced extractor nodes. Data Types can also collate results using Collation Providers to combine, sift and manipulate results further.

Document Type: Document Type nodes represent a distinct type of document, such as an invoice or a contract. Document Types are created as child nodes of a Content Model or a Content Category. They serve three primary purposes:

  1. They are used to classify documents. Documents are considered "classified" when the Batch Folder is assigned a Content Type (most typically, a Document Type).
  2. The Document Type's Data Model defines the Data Elements extracted by the Extract activity (including any Data Elements inherited from parent Content Types).
  3. The Document Type defines all "Behaviors" that apply (whether from the Document Type's Behavior settings or those inherited from a parent Content Type).

Execute: Execute is an Activity that runs one or more specified object commands. This gives access to a variety of Grooper commands in a Batch Process for which there is no Activity, such as the "Sort Children" command for Batch Folders or the "Expand Attachments" command for email attachments.

Extract: Extract is an Activity that retrieves information from Batch Folder documents, as defined by Data Elements in a Data Model. This is how Grooper locates unstructured data on your documents and collects it in a structured, usable format.

Lexical: "Lexical" is a Classify Method that classifies folder Batch Folders based on the text content of trained document examples. This is achieved through the statistical analysis of word frequencies that identify description Document Types.

Output Extractor Key: The Output Extractor Key property is another weapon in the arsenal of powerful Grooper classification techniques. It allows Data Types to return results normalized in a way more beneficial to document classification.

Project: Projects are the primary containers for configuration nodes within Grooper. The Project is where various processing objects, such as Content Models, Batch Processes, and profile objects, are stored. This makes resources easier to manage, easier to save, and simplifies how node references are made in a Grooper Repository.

Reference: Reference is a Value Extractor used to reference an Extractor Node. This allows users to create re-usable extractors and use the more complex Data Type and Field Class extractors throughout Grooper.

Scope: The Scope property of a Batch Process Step, as it relates to an Activity, determines at which level in a Batch hierarchy the Activity runs.