2023:Lexical (Classify Method)


This article is about an older version of Grooper.

Information may be out of date and UI elements may have changed.


"Lexical" is a Classify Method that classifies folder Batch Folders based on the text content of trained document examples. This is achieved through the statistical analysis of word frequencies that identify description Document Types.

Classification is then performed by the Classify activity, using the trained examples and the Lexical property configuration on a Content Model.

About

Lexical classification can be enabled and configured on any Content Model object. To do so, set the Classification Method property to Lexical.



What are you classifying? - Document Types

As mentioned before, Lexical classification is a training-based approach. Generally speaking, a training-based approach is one where examples of a document type are used to classify other documents as one type or another. Essentially, the whole point is to distinguish one type of document from another.

This may be obvious, but before you can give examples of what one type of document looks like, you have to give a name to the type of document you want to classify. In Grooper, we do this by adding Document Type objects to a Content Model.

For example, imagine you have a collection of human resources documents. For each employee, you'll have a variety of different kinds of documents in their HR file, such as a federal W-4 form, their employment application, various documents pertaining to their health insurance enrollment, and more. In order to distinguish those documents from one another (in other words, classify them), you will need to add a Document Type for each kind of document.

Take the four kinds of documents seen here: a federal W-4, an employee data sheet, an FSA enrollment form, and a pension enrollment form.



If we want to classify a Batch of these documents and assign the W-4 documents a "W-4" classification and so on, we would need to create a Content Model and add one Document Type for each kind of document.

A Content Model is how we determine the taxonomy of our document set. Taxonomy is just a fancy word for a classification scheme. Zoological taxonomy organizes organisms into a classification scheme, from domain all the way down to species. We do much the same thing with documents and a Content Model.

The whole set of HR documents belongs to the top level in the hierarchy, the Content Model itself. Each individual kind of document is represented by a Document Type, the next level down in that hierarchy. Each one is distinct from the others, but still part of the Content Model's scope. Just like insects, spiders, and lobsters are distinct from each other but are all part of the arthropod phylum.
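To make the hierarchy concrete, here is a minimal sketch of the HR taxonomy above, modeled as plain Python data structures. This is purely illustrative; the class and field names are hypothetical and are not Grooper objects or APIs.

```python
from dataclasses import dataclass, field

@dataclass
class DocumentType:
    # A distinct kind of document, e.g. "Federal W-4"
    name: str

@dataclass
class ContentModel:
    # The root of the taxonomy; Document Types are the next level down
    name: str
    document_types: list[DocumentType] = field(default_factory=list)

# The HR document set from the example above
hr_model = ContentModel(
    name="HR Documents",
    document_types=[
        DocumentType("Federal W-4"),
        DocumentType("Employee Data Sheet"),
        DocumentType("FSA Enrollment Form"),
        DocumentType("Pension Enrollment Form"),
    ],
)
```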

How are documents classified? - Trained Examples

The Lexical method uses trained examples for each Document Type in order to classify Batches. During the Classify activity, unclassified documents are compared to trained examples of the Document Types in a Content Model. The document will be assigned the Document Type it is most similar to.
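Conceptually, the comparison can be sketched like this. The snippet below is a simplified Python illustration, not Grooper's internal implementation: each unclassified document is scored against the trained examples of every Document Type, and the highest-scoring Document Type is assigned. The `similarity` function is a stand-in for the TF-IDF-based comparison described later in this article.

```python
def classify(document_text, trained_examples, similarity):
    """Assign the Document Type whose trained examples are most similar
    to the unclassified document (simplified illustration)."""
    best_type, best_score = None, float("-inf")
    for doc_type, examples in trained_examples.items():
        # Score the document against every trained example of this Document Type
        score = max(similarity(document_text, example) for example in examples)
        if score > best_score:
            best_type, best_score = doc_type, score
    return best_type
```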

You can train documents using the "Classification Testing" tab of a "Classify" Batch Process Step (we will go into this in more depth in the How To section of this article).

You then train a document by right-clicking on the document you wish to train, hovering over "Classify", and clicking "Train As..." when it pops up.

So, for this example, we've selected a W-4 form.

When the new "Train As" window pops up, you can click the hamburger icon to the right of the Content Type property and select the Content Model and then the Document Type from the drop down menu.

Here we classified this document as a "Federal W-4".

This will create two new levels of hierarchy in your Content Model. Training a document will create a Form Type of that document as a child of the Document Type assigned. The Form Type will have its own Page Type children corresponding to each page of the trained document.

You will create multiple Form Types for trained examples of documents of varying lengths. You will create a 2-Page Form Type for documents two pages in length (with two Page Type child objects), a 1-Page Form Type for single-page documents (with a single Page Type object), and a 10-Page Form Type for ten-page documents (with ten Page Type children).
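As a rough illustration of that hierarchy (hypothetical Python, not Grooper objects), training a two-page example produces a Form Type with two Page Type children:

```python
def train_example(doc_type_name, page_count):
    """Model the hierarchy created by training: a Form Type named for its
    page count, with one Page Type entry per page of the trained document."""
    return {
        "form_type": f"{doc_type_name} ({page_count}-Page)",
        "page_types": [f"Page {i + 1}" for i in range(page_count)],
    }

# A two-page W-4 example yields a 2-Page Form Type with two Page Type children
print(train_example("Federal W-4", page_count=2))
# {'form_type': 'Federal W-4 (2-Page)', 'page_types': ['Page 1', 'Page 2']}
```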

What is being trained? - Text Features

When it comes time to compare unclassified documents to trained examples, specifically what is compared is the lexical content of the documents. In other words, words. Documents use language to convey information. Words and phrases are features that make one document distinct from another. Words used in the documents of one Document Type will share some meaningful similarities, which will be different from the language of another Document Type.

In order to find this lexical content, you first need to set a Text Feature Extractor. A Text Feature Extractor extracts text-based values from document samples to be used as identifiable features of the document.

Commonly, the extractor used here locates unigrams (single words), bigrams (two-word phrases), or trigrams (three-word phrases) as the features. However, a Text Feature Extractor is highly configurable, allowing you to use lexicons specific to your document set, exclude portions of a document's text from training, use tokenized features of non-lexical text, and more.
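As a rough sketch of what an n-gram feature extractor produces (simple Python, not an actual Grooper extractor configuration), the same tokenized text yields unigrams, bigrams, or trigrams depending on the n-gram length:

```python
import re

def ngram_features(text, n=1):
    """Extract unigram (n=1), bigram (n=2), or trigram (n=3) features from text,
    as a simplified stand-in for a Text Feature Extractor."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

sample = "Employee's Withholding Certificate"
print(ngram_features(sample, n=1))  # ["employee's", 'withholding', 'certificate']
print(ngram_features(sample, n=2))  # ["employee's withholding", 'withholding certificate']
```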

This is the first thing you will do when configuring Lexical classification. If you're training the words in a document, you need to tell Grooper how to find those words first! After Lexical is chosen as the Classification Method of a Content Model, the Text Feature Extractor can be set in the Lexical sub-properties. This can be a Reference to a Data Type or an Internal regular expression pattern.

FYI

Any Data Type can be a Text Feature Extractor. You can customize this extractor however best suits your document classification needs. However, there are a few pre-built feature extractors that ship with every Grooper install. You can find them in the Data Extraction folder, under the following folder path: Data Types > Downloads > Features.

How are features trained? - TF-IDF

TF-IDF stands for "Term Frequency-Inverse Document Frequency". It is a numerical statistic intended to reflect how important a word is to a document within a collection (or document set or "corpus"). This "importance" is assigned a numerical weighting value. The higher the word's weighting value, the more relevant it is to that document in the corpus (or how similar it is to a Document Type for our purposes).

Text features (extracted by the Text Feature Extractor) are given weightings according to the TF-IDF algorithm. Features are given a higher weighting the more they appear on a document (Term Frequency), mitigated by how common that feature is across multiple Document Types (Inverse Document Frequency). Some words appear generally across documents and hence are less useful as unique identifiers, so their weighting is lessened.
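The calculation itself can be sketched in a few lines. The following is a minimal, standard TF-IDF computation in Python, not Grooper's exact implementation; the feature lists are assumed to come from the Text Feature Extractor described above.

```python
import math
from collections import Counter

def tf_idf(documents):
    """Weight each feature by how often it appears in a document (Term Frequency),
    discounted by how many documents contain it (Inverse Document Frequency)."""
    n_docs = len(documents)
    # Number of documents containing each feature
    doc_freq = Counter(feature for doc in documents for feature in set(doc))
    weights = []
    for doc in documents:
        counts = Counter(doc)
        weights.append({
            feature: (count / len(doc)) * math.log(n_docs / doc_freq[feature])
            for feature, count in counts.items()
        })
    return weights

# "withholding" appears only in the first document, so it keeps a high weight;
# "employee" appears in both documents, so its weight drops to zero here.
docs = [["employee", "withholding", "allowance"],
        ["employee", "pension", "enrollment"]]
print(tf_idf(docs))
```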

During a Classify activity, the features of an unclassified document are compared to the weighted features of the trained Document Types. The document is assigned the Document Type it is most similar to.

For a more in depth explanation of TF-IDF, visit the TF-IDF article.

Mixed Classification: Combining Training-Based and Rules-Based Approaches

Furthermore, a Rules-Based approach can be taken in combination with the training-based approach when using the Lexical Classification Method. This can be done by setting a Positive Extractor on the Document Type object of a Content Model. If the extractor yields a result, the document will be classified as that Document Type. Generally, this will "win out" over the training weightings, because the Positive Extractor's confidence result (as a percentage value) will be higher than the document's similarity to the trained examples (as a percentage value) for a Document Type.

This way, if there is a value you know will appear on a Document Type and can be extracted, you can set a Positive Extractor on the Document Type to classify those documents. For example, document titles are often used as "rules". If you can extract text matching a title to a corresponding Document Type, this is often a quick and easy way to classify a document. But if that extractor fails for whatever reason (because of bad OCR or a new title not matching the extractor's regex), you have training data which can act as a backup classification method.
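That combined flow can be sketched like this. The snippet is illustrative Python only: the regular expressions stand in for Positive Extractors, the `lexical_classify` callback stands in for the trained Lexical comparison, and the logic is simplified to "a rule wins whenever it hits".

```python
import re

# Hypothetical title "rules" acting as Positive Extractors for two Document Types
POSITIVE_EXTRACTORS = {
    "Federal W-4": re.compile(r"Employee's Withholding Certificate", re.IGNORECASE),
    "FSA Enrollment Form": re.compile(r"Flexible Spending Account Enrollment", re.IGNORECASE),
}

def classify_mixed(document_text, lexical_classify):
    """Rules first: if a Positive Extractor matches, that Document Type wins.
    Otherwise fall back to the training-based (Lexical) comparison."""
    for doc_type, pattern in POSITIVE_EXTRACTORS.items():
        if pattern.search(document_text):
            return doc_type
    return lexical_classify(document_text)
```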

Many of the best classification strategies involve combining the training-based Lexical method with a rules-based approach.

Glossary

Batch Folder: A Batch Folder is an organizational unit within a Batch, allowing for a structured approach to managing and processing a collection of documents. Batch Folder nodes serve two purposes in a Batch. (1) Primarily, they represent "documents" in Grooper. (2) They can also serve more generally as folders, holding other Batch Folders and/or Batch Page nodes as children.

  • Batch Folders are frequently referred to simply as "documents" or "folders" depending on how they are used in the Batch.

Batch Process Step: Batch Process Steps are specific actions within a Batch Process sequence. Each Batch Process Step performs an "Activity" specific to some document processing task. These Activities will be either "Code" Activities or "Review" activities. Code Activities are automated by Activity Processing services. Review activities are executed by human operators in the Grooper user interface.

  • Batch Process Steps are frequently referred to as simply "steps".
  • Because a single Batch Process Step executes a single Activity configuration, they are often referred to by their referenced Activity as well. For example, a "Recognize step".

Batch: Batch nodes are fundamental in Grooper's architecture. They are containers of documents that are moved through workflow mechanisms called Batch Processes. Documents and their pages are represented in Batches by a hierarchy of Batch Folders and Batch Pages.

Classification Method: The Classification Method property of a Content Model determines the logic Grooper uses to classify documents, such as Lexical or Rules-Based.

Classification: Classification is the process of identifying and organizing documents into categorical types based on their content or layout. Classification is key for efficient document management and data extraction workflows. Grooper has different methods for classifying documents. These include methods that use machine learning and text pattern recognition. In a Grooper Batch Process, the Classify Activity will assign a Content Type to a Batch Folder.

Classify: Classify is an Activity that "classifies" Batch Folders in a Batch by assigning them a Document Type.

  • Classification is key to Grooper's document processing. It affects how data is extracted from a document (during the Extract activity) and how Behaviors are applied.
  • Classification logic is controlled by a Content Model's "Classify Method". These methods include using text patterns, previously trained document examples, and Label Sets to identify documents.

Content Model: Content Model nodes define a classification taxonomy for document sets in Grooper. This taxonomy is defined by the Content Categories and Document Types they contain. Content Models serve as the root of a Content Type hierarchy, which defines Data Element inheritance and Behavior inheritance. Content Models are crucial for organizing documents for data extraction and more.

Content Type: Content Types are a class of node types used to classify Batch Folders. They represent categories of documents (Content Models and Content Categories) or distinct types of documents (Document Types). Content Types serve an important role in defining Data Elements and Behaviors that apply to a document.

Data Extraction: Data Extraction involves identifying and capturing specific information from documents (represented by Batch Folders in Grooper). Extraction is performed by configurable Data Extractors, which transform unstructured or semi-structured data into a structured, usable format for processing and analysis.

Data Type: Data Types are nodes used to extract text data from a document. Data Types have more capabilities than Value Readers. Data Types can collect results from multiple extractor sources, including a locally defined extractor, child extractor nodes, and referenced extractor nodes. Data Types can also collate results using Collation Providers to combine, sift and manipulate results further.

Document Type: Document Type nodes represent a distinct type of document, such as an invoice or a contract. Document Types are created as child nodes of a Content Model or a Content Category. They serve three primary purposes:

  1. They are used to classify documents. Documents are considered "classified" when the Batch Folder is assigned a Content Type (most typically, a Document Type).
  2. The Document Type's Data Model defines the Data Elements extracted by the Extract activity (including any Data Elements inherited from parent Content Types).
  3. The Document Type defines all "Behaviors" that apply (whether from the Document Type's Behavior settings or those inherited from a parent Content Type).

Extract: Extract is an Activity that retrieves information from Batch Folder documents, as defined by Data Elements in a Data Model. This is how Grooper locates unstructured data on your documents and collects it in a structured, usable format.

Form Type: Form Types represent trained variations of a Document Type. These nodes store machine learning training data for Lexical and Visual document classification methods.

Lexical: "Lexical" is a Classify Method that classifies folder Batch Folders based on the text content of trained document examples. This is achieved through the statistical analysis of word frequencies that identify description Document Types.

OCR: OCR stands for Optical Character Recognition. It allows text on paper documents to be digitized, in order to be searched or edited by other software applications. OCR converts typed or printed text from digital images of physical documents into machine readable, encoded text.

Reference: Reference is a Value Extractor used to reference an Extractor Node. This allows users to create re-usable extractors and use the more complex Data Type and Field Class extractors throughout Grooper.

Rules-Based: "Rules-Based" is a Classify Method that employs "rules" defined on each description Document Type to classify folder Batch Folders. Positive Extractor and Negative Extractor properties are configured for each Document Type to positively or negatively associate a Batch Folder based on predefined criteria.

  • While the Positive and Negative Extractors impact results for all Classify Methods, the Rules-Based method classifies using only these properties and nothing else.

TF-IDF: TF-IDF stands for term frequency-inverse document frequency. It is a statistical calculation intended to reflect how important a word is to a document within a document set (or "corpus"). It is how Grooper uses machine learning for training-based document classification (via the Lexical method) and data extraction (via the Field Class extractor).