Lexical (Classify Method)
Configadmin (talk | contribs): created the page
Dgreenwood (talk | contribs): no edit summary
Revision as of 10:47, 6 October 2020
The Lexical Classification Method is one of three methods of classifying documents available to Grooper. This method classifies documents according to their text content, obtained from OCR or extracted native PDF text (via the Recognize activity). It uses a Training-Based Approach to "teach" Grooper to classify a document from trained examples of the Document Type.
Classification is then performed using the Classify activity, using the trained examples and Lexical property configuration on a Content Model.
About
Lexical classification can be enabled and configured on any Content Model object. To do so, set the Classification Method property to Lexical.
What are you classifying? - Document Types
As mentioned before, Lexical classification is a training-based approach. Generally speaking, a training-based approach is one where examples of a document type are used to teach the system to classify new documents as that type. Essentially, the whole point is to distinguish one type of document from another.
This may be obvious, but before you can give examples of what one type of document looks like, you have to give a name to the type of document you want to classify. In Grooper, we do this by adding Document Type objects to a Content Model.
For example, imagine you have a collection of human resources documents. For each employee, you'll have a variety of different kinds of documents in their HR file, such as a federal W-4 form, their employment application, various documents pertaining to their health insurance enrollment, and more. In order to distinguish those documents from one another (in other words, classify them), you will need to add a Document Type for each kind of document.
Suppose you have four kinds of documents: a federal W-4, an employee data sheet, an FSA enrollment form, and a pension enrollment form. You would add four Document Types to the Content Model, one for each kind.
How are documents classified? - TF-IDF
TF-IDF stands for "Term Frequency-Inverse Document Frequency". It is a weighting scheme: a term scores higher the more frequently it appears in a document (term frequency), and lower the more document types it is common to (inverse document frequency).
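The weighting can be illustrated with a minimal sketch. This is not Grooper's implementation; the sample terms and the one-bag-of-words-per-type setup are invented for illustration:

```python
import math
from collections import Counter

# Hypothetical training samples: one bag of words per Document Type.
samples = {
    "W-4": ["employee", "withholding", "allowance", "certificate"],
    "FSA Enrollment": ["employee", "flexible", "spending", "account"],
}

def tf_idf(term, doc_terms, all_docs):
    """Term Frequency * Inverse Document Frequency for one term."""
    tf = Counter(doc_terms)[term] / len(doc_terms)
    df = sum(1 for terms in all_docs.values() if term in terms)
    idf = math.log(len(all_docs) / df)
    return tf * idf

# "employee" appears in every type, so its IDF (and weight) is zero.
print(tf_idf("employee", samples["W-4"], samples))     # 0.0
# "withholding" is unique to the W-4, so it carries weight.
print(tf_idf("withholding", samples["W-4"], samples))  # > 0
```

Note how the term shared by both types contributes nothing, which is exactly the "mitigated by if that feature is common to multiple Document Types" behavior described below.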
How are documents classified? - Text Features
A Text Feature Extractor is set to extract values from document samples to be used as identifiable features of the document (such as words or phrases). These features are given weightings according to the TF-IDF algorithm. Features are given a higher weighting the more they appear on a document (Term Frequency), mitigated by if that feature is common to multiple Document Types (Inverse Document Frequency). During a Classify activity, the features of an unclassified document are compared to the weighted features of the trained Document Types. The document is assigned the Document Type it is most similar to.
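The comparison step can be sketched as follows, assuming TF-IDF weights have already been computed for each trained Document Type. The feature names and weights here are invented, and cosine similarity is one common choice of measure; Grooper's actual similarity calculation may differ:

```python
import math

# Hypothetical pre-computed TF-IDF weights per trained Document Type.
trained = {
    "W-4": {"withholding": 0.7, "allowance": 0.5, "employee": 0.1},
    "FSA Enrollment": {"flexible": 0.8, "spending": 0.6, "employee": 0.1},
}

def cosine(a, b):
    """Cosine similarity between two sparse feature-weight vectors."""
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def classify(doc_features):
    """Assign the Document Type whose weighted features are most similar."""
    return max(trained, key=lambda t: cosine(doc_features, trained[t]))

# An unclassified document whose text mentions withholding allowances:
print(classify({"withholding": 0.6, "employee": 0.2}))  # W-4
```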
Mixed Classification: Combining Training-Based and Rules-Based Approaches
Furthermore, a Rules-Based Approach can be taken in combination with the training-based approach. This is done by setting a positive extractor on the Document Type. If the extractor yields a result, the document is classified as that type without being compared to training examples. This way, if a value you know will appear on a Document Type can be extracted (such as a header title), you can set a positive extractor on the Document Type to classify those documents directly. And if that extractor fails for whatever reason, the training data acts as a backup classification.
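In outline, the fallback logic looks something like this. The regex extractors and the `classify_by_training` stand-in are hypothetical stand-ins for Grooper's actual mechanisms, not real Grooper objects:

```python
import re

# Hypothetical positive extractors: a pattern per Document Type that
# matches a value known to appear on that type, such as a header title.
positive_extractors = {
    "W-4": re.compile(r"Employee's Withholding Certificate"),
    "FSA Enrollment": re.compile(r"Flexible Spending Account Enrollment"),
}

def classify_by_training(text):
    """Stand-in for the training-based (TF-IDF) comparison step."""
    return "W-4"  # placeholder result for illustration only

def classify(text):
    # Rules-based pass first: a positive extractor hit wins outright,
    # skipping the comparison against training examples.
    for doc_type, pattern in positive_extractors.items():
        if pattern.search(text):
            return doc_type
    # If no extractor fires, fall back to the trained examples.
    return classify_by_training(text)

print(classify("Form W-4: Employee's Withholding Certificate"))  # W-4
```

The design point is the ordering: the cheap, deterministic rule runs first, and the training-based comparison only runs when no rule matches.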