Labeling Behavior (Behavior)


A Labeling Behavior is a Content Type Behavior designed to collect and utilize a document's field labels in a variety of ways. This includes functionality for classification, field extraction, table extraction, and section extraction.

The Labeling Behavior functionality allows Grooper users to quickly onboard new Document Types for structured and semi-structured forms, utilizing labels as a thumbprint for classification and data extraction purposes. Once the Labeling Behavior is enabled, labels are identified and collected using the "Labels" tab of Document Types. These "Label Sets" can then be used for the following purposes:

  • Document classification - Using the Labelset-Based Classification Method
  • Field based data extraction - Primarily using the Labeled Value Extractor Type
  • Tabular data extraction - Primarily using a Data Table object's Tabular Layout Extract Method
  • Sectional data extraction - Primarily using a Data Section object's Transaction Detection Extract Method

FYI

The Labeling Behavior and its functionality discussed in this article are often referred to as "Label Set Behavior" or simply "Label Sets".

You may download the ZIP(s) below and upload them into your own Grooper environment (version 2023). The first contains one or more Batches of sample documents. The second contains one or more Projects with resources used in examples throughout this article.

About

Labels serve an important function on documents. They give the reader critical context to understand where data is located and what it means. How do you know the difference between the date on an invoice indicating when the invoice was sent and the date indicating when you should pay it? It's the labels. The labels are what distinguish one type of date from another. For example, "Invoice Date" for the date the invoice was sent and "Due Date" for the date you need to pay by.

Labels can be a way of classifying documents as well. What does one individual label tell you about a document? Well, maybe not much. However, if you take them all together, they can tell you quite a bit about the kind of document you're looking at. For example, a W-4 employee withholding form is going to use different labels than an employee healthcare enrollment form. These are two very different documents collecting very different information. The labels used to collect this information are thus different as well.

Furthermore, you can even tell the difference between two closely related documents using labels. For example, two different invoices from two different vendors may share some similarity in the labels they use to detail information, but there will be some differences as well. These differences can be useful identifiers to distinguish one from the other. Put all together, labels can act as a thumbprint Grooper can use to classify a document as one Document Type or another.

Even though these two invoices share some labels (highlighted in blue), there are others that are unique to each one (highlighted in yellow). This awareness of how one kind of invoice from one vendor uses labels differently from another can give you a method of classifying these documents using their label sets.


The Labeling Behavior is built on these concepts, collecting and utilizing labels for Document Types in a Content Model for classification and data extraction purposes.

As a Behavior, the Labeling Behavior is enabled on a Content Type object in Grooper.

While you can enable Labeling Behavior on any Content Type, in almost all cases, you will want to enable this Behavior on the Content Model.

Typically, you want to collect and use label sets for multiple Document Types in the Content Model, not just one Document Type individually. Enabling the Behavior on the Content Model enables the Labeling Behavior for all child Document Types, allowing you to collect and utilize labels for each of them.

  1. Here, we have selected a Content Model in the Node Tree.
  2. To add a Behavior, select the Behaviors property and press the ellipsis button at the end.

  1. This will bring up a dialog window to add various behaviors to the Content Model, including the Labeling Behavior.
  2. Add the Labeling Behavior using the "Add" button.
  3. Select Labeling Behavior from the listed options.

  1. Once added, you will see a Labeling Behavior item added to the Behaviors list.
  2. Selecting the Labeling Behavior in the list, you will see property configuration options in the right panel.
    • The configuration options in the property panel pertain to fuzzy matching of collected labels, as well as constrained and vertical wrapping capabilities used to target stacked labels. (A sketch below illustrates the general idea of fuzzy label matching.)
    • By default, Grooper presumes you will want to use some fuzzy matching and enable constrained and vertical wrapping. These defaults work well for most use cases. However, you can adjust these properties here as needed.
  3. Press the "OK" button to finish adding the Labeling Behavior and exit this window.

  1. Now, on the Content Model tab, you should see that a Behavior is set.
  2. Save your changes.

Once the Labeling Behavior is enabled, the next big step is collecting label sets for the various Document Types in your Content Model.

  1. With the Labeling Behavior enabled, you will now see a "Labels" tab present for the Content Model.
    • This tab is also now present for each individual Document Type.
  2. Label sets are collected in this tab for each Document Type in the Content Model.

Each Document Type has its own set of labels used to define information on the document. For example, the "Factura" Document Type in this Content Model uses the label "PO Number" to call out the purchase order number on this invoice document. A different Document Type, corresponding to a different invoice format, might use a different label such as "Purchase Order Number" or "PO #".

  1. Ultimately, this is the data we want to collect using the Content Model's Data Model.
  2. We use the "Labels" tab to collect labels corresponding to the various Data Elements (Data Fields, Data Tables, and Data Sections) of the Data Model.
    • This provides a user interface to enter a label identifying the value you wish to collect for the Data Elements.
  3. For example, the label "PO Number" identifies the purchase order number for this invoice.
  4. Therefore, the label "PO Number" is collected for the "Purchase Order Number" Data Field in the Data Model.

For more information on collecting label sets for the Document Types in your Content Model, see the How To section of this article.

In Grooper 2023, for labels to show up in the Labels tab of the Content Model, a Label Set aware Value Extractor (such as Labeled Value or Tabular Layout) must be set on your Data Fields and Data Tables.

Once label sets are collected for each Document Type, they can be used for classification and data extraction purposes.

For example, labels were used in this case to:

  1. Classify the documents, assigning each document the appropriate Document Type.
  2. Extract all the Data Fields seen here, collecting field based data from the document.

  1. Extract the "Line Items" Data Table, collecting the tabular data seen here.

For more information on how to use labels for these purposes, see the How To section of this article.

"How to" tutorials

The Labeling Behavior (often referred to as "Label Set Behavior" or just "Label Sets") is well suited for structured and semi-structured document sets. Label Sets are particularly useful in situations where you have multiple variations of one kind of document. While the information you want to extract may be the same from variation to variation, how the data is laid out and labeled may differ greatly from one variation of the document to another. Label Sets allow you to quickly onboard new Document Types to capture new form structures.

We will use invoices for the document set in the following tutorials.

In a perfect world, you'd create a Content Model with a single "Invoice" Document Type whose Data Model would successfully extract all Data Elements for all invoices from all vendors every time no matter what.

This is often not the case. You may find you need to add multiple Document Types to account for variations of an invoice from multiple vendors. Label Sets give you a method of quickly adding Document Types to model new variations. In our case, we will presume we need to create one Document Type for each vendor.

We will start with five Document Types for invoices from five vendors.

  • Factura
  • Lasku
  • Envoy
  • Rechnung
  • Arve

You may download and import the file below into your own Grooper environment (version 2021). This contains the Batch(es) with the example document(s) discussed in this tutorial and the Content Model(s) configured according to the instructions.

How to collect label sets

Navigate to the Labels editor UI

Collecting labels for the Document Types in your Content Model will be the first thing you want to do after enabling the Labeling Behavior. Labels for each Data Element in the Document Type's Data Model are defined using the "Labels" tab of the Content Model.

  1. Select the desired Content Model.
  2. Navigate to the "Labels" tab.
  3. With a Batch selected in the "Test Batch" window panel, select a document folder.

  1. Right click on the folder.
  2. Click "Assign Document Type.."

  1. When the "Assign Document Type" window pops up, click the hamburger button on the right side of the Content Type property.
  2. From the drop down, find the Content Model you are using and select the correct Document Type for the document.

  1. Click "Execute" to assign the Document Type.

FYI

If you haven't added a Document Type for the selected document folder yet, you can click the "+" (plus sign) button at the top of the Labels UI to both create a Document Type and assign it to the document that is currently selected.

  1. Now that the Document Type has been set on the document, we have a couple of elements showing in our "LABELS" UI. However, we are not seeing any Data Field or Data Table objects.
  2. By default, the eye icon at the top of the "Labels" tab hides all fields that do not have a label set on them. Click the eye button to show these fields.

  1. Now all objects are showing in the "LABELS" UI.

If the objects are still not showing in the "LABELS" UI, go through your Data Fields and Data Tables and make sure a "label set aware" extractor, such as Labeled Value or Tabular Layout, has been set on each object. In Grooper 2023, objects will only show up in the "LABELS" UI if a label set aware extractor has been set on them.

Collect field labels

Now that this document has been classified (assigned a Document Type from our Content Model), we can collect labels for its Document Type. This can be done in one of three ways:

  1. Lassoing the label in the "Document Viewer".
  2. Double-clicking the label in the Document Viewer.
  3. Typing the label in manually.

!!

Going forward, this tutorial presumes you have obtained machine-readable text from these documents, either OCR'd text or native text, via the Recognize activity.

Generally, the quickest way is to simply lasso the label in the "Document Viewer".

  1. Select the Data Element whose label you wish to collect.
    • Here, we are selecting the "Invoice Number" Data Field.
  2. Click the "Rubberband Label" button.
  3. With your cursor, lasso around the text label on the document.

  1. Upon lassoing the label in the Document Viewer, the OCR'd or native text behind the selected region will be used to populate the Data Element's label.
    • At this point, the label for the "Invoice Number" Data Field is now "Invoice Number" because that's the text data we selected. Whatever text characters you lasso with your cursor will be assigned as the label.

If you choose, you may also manually enter a label for a Data Element by simply typing it into the text box.

  1. Here we've selected the "Purchase Order Number" Data Field and entered "PO Number".
  2. This will correspond to the label "PO Number" on the document itself.

  1. If you type the label in with a typo and it fails to match anything on the document, an error icon will appear next to the label rather than a check mark.

An error icon will appear any time the text in the label field does not match any OCR'd data on the document. This can be due to a typo in the label or an OCR error.

  1. Continue lassoing or manually entering labels until all are collected.
  2. Next, we will focus on collecting labels from tables and table columns (the Data Table and Data Column elements in a Data Model). The process is essentially the same, but bears some extra explanation.


Collect table and column labels

Table and column labels can be used for tabular data extraction as well, by setting a Data Table object to use the Tabular Layout Extract Method.

When collecting labels for this method of table extraction, keep in mind you must collect the individual column header labels, and may optionally collect the full row of column headers as well.

While it is optional, it is generally regarded as best practice to capture the full row of column header labels. This will generally increase the accuracy of your column label extraction. We will do both in this tutorial.

  1. We will collect the full row of column header labels for the Data Table object's label.
  2. We will collect each individual column header label for each individual Data Column object's label.

This may seem like you are duplicating your efforts but it is often critical to do both in order for the Tabular Layout Extract Method to map the table's structure and ultimately collect the table's data.

  • In particular, if you are dealing with OCR text containing character recognition errors, establishing the full header row for the table will boost the fuzzy matching capabilities of the Labeling Behavior (the sketch below illustrates why).
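
To see why, here is a quick illustration (not Grooper's algorithm) using a simple fuzzy similarity ratio. The header text is assembled from the column labels used later in this tutorial, and the OCR errors are hypothetical.

```python
# Illustration only: one bad OCR character hurts a short column header's
# fuzzy score much more than it hurts the score for the full header row.
from difflib import SequenceMatcher

def ratio(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# A single mis-read character is a large relative error on a short label...
print(ratio("Qty. Ord.", "Qty. 0rd."))

# ...but the same kind of error barely dents the score across the whole row.
header_row = "Qty. Ord. Description Unit Price Line Total"
ocr_row    = "Qty. 0rd. Descriptlon Unit Price Line Total"
print(ratio(header_row, ocr_row))
```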

  1. To collect the Data Table's label, select the Data Table object in the Labels tab.
    • Here, we've selected the Data Table named "Line Items".
  2. Lasso the entire header row for the table.
    • You may notice there are more columns on this table than we are collecting. As it is on the document, the table has six columns. But we're only collecting four, the "Quantity", "Description", "Unit Price", and "Line Total" Data Columns.
    • Generally, you should collect the whole row of column headers, even if there are extra columns whose data you are not collecting.

  1. Next, collect each child Data Column's header label.
    • Here, we've selected the "Quantity" Data Column.
  2. Lasso the individual column header for the selected Data Column.
    • Here, the stacked label, "Qty. Ord.".

  1. Continue collecting labels for the remaining Data Columns.
  2. We have four Data Columns for this Data Table. Therefore, we collect four header labels from the document.

Auto Map labels

As you add labels for each Document Type, you may find some documents have labels in common. For example, there are only so many ways to label an invoice number. It might be "Invoice Number", "Invoice No", "Invoice #" or even just "Invoice". Some invoices are going to use one label, others another.

When collecting labels for multiple Document Types you can use the "Auto Map" feature to automatically add labels you've previously collected on another Document Type.

  1. So far, we've only collected labels for one Document Type, the "Factura" Document Type.
  2. Now, we're collecting labels for the "Lasku" Document Type.
  3. Press the "Auto Map" button to automatically assign previously collected labels,

Grooper will search the document's text for labels matching those previously collected on other Document Types.

  1. For example, we collected the label "Remit To:" for the "Remit Address" Data Field for the "Factura" Document Type. The "Auto Map" feature found a match for this label on the document and assigned the "Lasku" Document Type's "Remit Address" Data Field the same label.

If a match is not found, the Data Element's label is left blank.

  1. For example, the label for the "Invoice Amount" Data Field for the "Factura" Document Type was "Amount due".
  2. This label was nowhere to be found on this document. The invoice amount is labeled "Total" on the "Lasku" documents. So, the label is left blank for you to collect.

As you keep collecting labels for more and more Document Types, the Auto Map feature will pick up more and more labels, allowing you to quickly onboard new Document Types.
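
Conceptually, Auto Map searches the document's text for labels already collected on other Document Types and assigns any that it finds. A rough sketch of that idea follows; it is not Grooper's implementation, and the labels, document text, and matching threshold are hypothetical.

```python
# Conceptual sketch only: auto-mapping labels for a new Document Type by
# reusing labels collected on other Document Types.
from difflib import SequenceMatcher

def best_match(candidate: str, document_text: str, threshold: float = 0.85) -> bool:
    """Slide a window the size of the candidate over the document text and
    report whether the candidate fuzzily matches anywhere above the threshold."""
    window = len(candidate)
    for start in range(0, max(1, len(document_text) - window + 1)):
        chunk = document_text[start:start + window]
        if SequenceMatcher(None, candidate.lower(), chunk.lower()).ratio() >= threshold:
            return True
    return False

def auto_map(previously_collected: dict[str, list[str]], document_text: str) -> dict[str, str | None]:
    """For each Data Element, reuse the first previously collected label found
    on the document; leave the label blank (None) if nothing matches."""
    return {element: next((lbl for lbl in labels if best_match(lbl, document_text)), None)
            for element, labels in previously_collected.items()}

# Labels collected so far on other Document Types (hypothetical).
collected = {"Remit Address": ["Remit To:"], "Invoice Amount": ["Amount due"]}
print(auto_map(collected, "Remit To: PO Box 123 ... Total 54,594.00"))
# {'Remit Address': 'Remit To:', 'Invoice Amount': None}
```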

Be aware, you may still need to validate the auto mapped values and make adjustments.

  1. For example, the label "Date" is very generic.
  2. This label does actually correspond to the invoice date on the "Lasku" Document Type in this case.
  3. However, that label could refer to some other date on another Document Type. Even on this document, the label "Date" is returning the "Date" portion of "Ship Date" and other instances where "Date" is found in the text.
    • As a side note, there are ways to make simple labels like "Date" more specific to the data they pertain to using "Custom Labels". More on that in the next tab.
  4. You can also make minor adjustments to the mapped labels.
    • The mapped label for the "Purchase Order Number" Data Field was "PO Number" (as it was collected for the "Factura" Document Type), but it is more specifically "PO Number:" on the "Lasku" documents. We can just add the colon at the end of the label manually.

Collect custom labels

It's important to keep in mind labels are collected for corresponding Data Elements in a Data Model. You collect one label per Data Element (Data Field, Data Section, Data Table or Data Column). What if you want to collect a label that is distinct from a Data Element, one that doesn't necessarily have to do with a value collected by your Data Model? And why would you even want to?

That's what "Custom Labels" are for. Custom labels serve two primary functions:

  1. Providing additional labels for classification purposes.
  2. Providing context labels when a Data Element's label matches multiple points on a document.

Custom Labels may only be added to Data Model, Data Section or Data Table objects' labels. Put another way, any Data Element in the Data Model's hierarchy that can have child Data Elements can have custom labels.

When used for classification purposes, custom labels are typically added to the Data Model itself.

  1. First select the Data Element in the Data Model's hierarchy to which you wish to add the label.
    • In our case, we're selecting the Data Model itself.
  2. Click the "Add Label" icon at the top of the "Label" tab.

  1. When the small window pops up, enter a name for your custom label and click "Add Custom".

  1. This will add a new label tab, named whatever you named it in the previous step.
    • Here, we just named it "Custom 01".
  2. Collect the custom label by either lassoing the text using the Document Viewer or manually typing in the label.
    • For example, the word "Invoice" might be a useful label for classification purposes. This label isn't used to collect anything in our Data Model, but might be helpful to identify this and other invoices from the Factura Technology Corp as "Factura" Document Types. Collecting the label "Invoice" as a Custom Label will allow us to use it as a feature of this Document Type for classification.

  1. Now check your custom label for accuracy and OCR errors.

You may add more Custom Labels to the selected Data Element by repeating the process described above.

  1. Click the "Add Labels" icon again.
  2. Name your label and click "Add Custom".

Custom labels as context labels

Some labels are more specific than others. The label "Invoice Date" is more specific than the label "Date". If you see the label "Invoice Date" you know the date you're looking at is the date the invoice was generated. The label "Date" may refer to the invoice's generation date or it could be part of another label like "Due Date". However, some invoice formats will label the invoice date as simply "Date".

  1. For example, the label "Date" on this "Factura" Document Type does indeed correspond to the invoice date for the "Invoice Date" Data Field.
  2. However, this label pops up as part of other labels too, such as the "Date" in "Due Date" or "Order Date".

This can present a challenge for data extraction. The more generic the label used to identify a desired value, the more likely false-positive results become. There are three separate date values identified by the word "Date" (in full or in part) on this document.

This is the second reason Custom Labels are typically added for a Document Type: to provide extra context for generic labels, especially when they produce multiple results on a document, leading to false-positive data extraction. (A sketch at the end of this walkthrough illustrates the idea.)

There are two steps to adding and using a Custom Label for this purpose:

  1. Add the Custom Label.
  2. Marry the Custom Label with the Data Element's label.

We will refer to this type of a Custom Label as a "Context Label" from here out.

The only "trick" to this is adding the Context Label to the appropriate level of the Data Model's hierarchy.

Remember, a Custom Label may only be added to a Data Model, Data Section or Data Table object. We cannot add a Custom Label to a Data Field, such as the "Invoice Number" Data Field.

To add a Context Label a Data Field can use, we must add the Custom Label to its direct parent Data Element.

  1. In the case of the "Invoice Date" Data Field, its direct parent Data Element is the Data Model itself.

  1. Click the "Add Label" icon at the top of the "Labels" tab.
  2. Type in a name for your label and click "Add Custom".

  1. We are going to lasso these two labels "Date" and "Page" together to make the custom label.

  1. We can see that the label was collected appropriately.
    • This custom label will provide the simple label "Date" some extra context.
    • Which of the three results for the label "Date" do we want to accept? The one falling within this zone.

Now that we've added the label, we need to marry the Custom Label with the Data Field it's giving extra context to. This is done with the Parent property of a Data Field label.

  1. Click on the check mark icon next to the "Invoice Date" Data Field label.

  1. Click on the hamburger icon next to the Parent property.
  2. Using the drop down list, select the Custom Label you wish to use for the Context Label.

  1. Click "SAVE" to save your changes.

  1. Notice with this Context Label added we only return a single result for the "Invoice Date" Data Field's label "Date". This is the label we want to associate with this Data Field.
  2. The other two results do not fall within the Context Label, and are no longer returned.
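
Conceptually, the Context Label acts as a zone that filters the Data Field label's candidate matches: only results falling inside the parent label's region are kept. The sketch below illustrates that filtering; the coordinates are hypothetical and this is not Grooper's internal logic.

```python
# Conceptual sketch: keep only the label results that fall inside the
# bounding box of a parent Context Label.
from dataclasses import dataclass

@dataclass
class Box:
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, other: "Box") -> bool:
        return (self.left <= other.left and self.top <= other.top
                and self.right >= other.right and self.bottom >= other.bottom)

# Hypothetical coordinates (inches) for three "Date" hits and a context zone
# lassoed around the "Date ... Page" area at the top of the invoice.
context_zone = Box(5.0, 0.5, 8.0, 1.5)
date_hits = {
    "invoice date": Box(5.2, 0.8, 5.8, 1.0),   # inside the zone - keep
    "ship date":    Box(1.0, 3.2, 1.6, 3.4),   # outside - discard
    "order date":   Box(1.0, 3.6, 1.6, 3.8),   # outside - discard
}

kept = {name: box for name, box in date_hits.items() if context_zone.contains(box)}
print(kept)  # only the hit inside the context zone survives
```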

How to use label sets for classification

About Labelset-Based classification

Labelset-Based is a Classification Method that leverages the labels defined via a Labeling Behavior to classify Batch Folders.

Label Sets can be used for classifying documents using the Labelset-Based Classification Method. For structured and semi-structured forms, labels end up being a way of identifying a document. Without the field data entered, the labels are really what define the document. You know what kind of document you're looking at based on what kind of information is presented and, in the case of Labelset-Based classification, how that data is labeled. Even when those labels are very similar from one variant to the next, they end up being a thumbprint of that variant. For example, you might use Labelset-Based classification to create Document Types for different variations of invoices from different vendors. The information presented on each variant from each vendor will be more or less the same, and some labels will be commonly used by different vendors (such as "Invoice Number"). However, if there is enough variation in the set of labels, you can easily differentiate an invoice from one vendor versus another just based on the variation in labels.

Take these four "documents". Each one is collecting the same information:

  • A person's name
  • Their social security number
  • Their birthday
  • Their phone number
  • Their address

So we might have five Data Fields in our Data Model, one for each piece of information. We'd also collect one label for each Data Field.

While the data we want from these documents is the same, there is some variation in the labels used for each different document type. If we wanted to distinguish these four documents from each other, we could classify them using the Labelset-Based Classification Method. This is all done by measuring the similarity between the collected label sets for each Document Type.


How is Document Type "B" different from Document Type "A"?

  • It uses the label SSN: instead of Social Security Number:.

How is Document Type "C" different from Document Type "A"?

  • It uses the labels SSN: instead of Social Security Number: and DOB: instead of Date of Birth:.

How is Document Type "D" different from Document Type "A"?

  • It uses the labels SSN: instead of Social Security Number:, DOB: instead of Date of Birth:, and Phone #: instead of Phone Number.

Using the Labelset-Based Classification Method, unclassified documents are classified by assigning each document the Document Type whose labels are most similar. The basic concept: "similarity" is determined by how many labels are shared between the unclassified document and the label sets collected for the Document Types in your Content Model. The unclassified document is assigned the Document Type with the highest degree of similarity between the labels matched on the document and the Document Types' label sets.

The similarity calculation is very straightforward. Grooper searches the document for the labels collected for every Document Type and measures how many label characters were matched against the total characters in each Document Type's label set.

If each of these five labels is collected for each Document Type's Label Set, you'd have the following character totals for the set.

  • Document Type "A" - 63 total label characters.
  • Document Type "B" - 44 total label characters.
  • Document Type "C" - 34 total label characters.
  • Document Type "D" - 29 total label characters.

How similar is Document Type "A" to Document Type "B"?

  • "A" uses the label Social Security Number: instead of SSN:
  • However, there is a match for the remaining four labels.
  • The remaining four labels, Name:, Date of Birth:, Phone Number: and Address:, comprise 40 characters.
  • The similarity score is the percentage of matched label characters divided by the total characters in the Document Type's label set.
    • 40 matched label characters / 44 total label characters = 0.9091
    • "A" is roughly 91% similar to "B"

How similar is Document Type "A" to Document Type "C"?

  • "A" uses the label Social Security Number: instead of SSN: and Date of Birth instead of DOB:
  • However, there is a match for the remaining three labels.
  • The remaining three labels, Name:, Phone Number: and Address:, comprise 26 characters.
  • The similarity score is the percentage of matched label characters divided by the total characters in the Document Type's label set.
    • 26 matched label characters / 34 total label characters = 0.7647
    • "A" is roughly 76% similar to "B"

How similar is Document Type "A" to Document Type "D"?

  • Figure out what labels from "A" match "D", and do the math.
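
The sketch below reproduces the arithmetic above and works out the remaining comparison against Document Type "D". It follows the character-count description given in this article; Grooper's actual similarity calculation may differ in its details.

```python
# A sketch of the character-count similarity described above. Each Document
# Type's label set is scored by the characters of its labels matched on the
# document, divided by the total characters in that label set.
label_sets = {
    "A": ["Name:", "Social Security Number:", "Date of Birth:", "Phone Number:", "Address:"],
    "B": ["Name:", "SSN:", "Date of Birth:", "Phone Number:", "Address:"],
    "C": ["Name:", "SSN:", "DOB:", "Phone Number:", "Address:"],
    "D": ["Name:", "SSN:", "DOB:", "Phone #:", "Address:"],
}

def similarity(document_labels: list[str], label_set: list[str]) -> float:
    matched = sum(len(lbl) for lbl in label_set if lbl in document_labels)
    total = sum(len(lbl) for lbl in label_set)
    return matched / total

# A document bearing Document Type "A"'s labels, scored against each label set:
doc_labels = label_sets["A"]
for name, label_set in label_sets.items():
    print(name, round(similarity(doc_labels, label_set), 4))
# A 1.0
# B 0.9091  (40 matched characters / 44 total)
# C 0.7647  (26 matched characters / 34 total)
# D 0.4483  (13 matched characters / 29 total -- only Name: and Address: match)
```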

If we run one of these "documents" through Grooper, we can see these results very clearly.

  1. The document shares all five labels in common with the "A" Document Type.
  2. Grooper searches for labels matching the label sets for all Document Types in the Content Model and creates a similarity score for each one.
    • You can see the math described above play out here. Matching all labels in the "A" Document Type's label set, the document is considered 100% similar. Less so for the other Document Types because while they share some labels (like Name:), some are different (like Social Security Number: versus SSN:).
  3. Upon classification, the document folder is assigned the Document Type with the highest similarity score.
    • In this case the "A" Document Type.

Configuring Labelset-Based classification

Next, we will walk through the steps required to enable and configure the Labelset-Based Classification Method, using our example set of invoice documents.

The basic steps are as follows:

  1. Set the Content Model's Classification Method property to Labelset-Based
  2. Collect labels for each Document Type
  3. Test classification
  4. Reconfigure, updating existing Document Types' Label Sets and adding new Document Types as needed.

Assign the Labelset-Based Classification Method

Once you've figured out you want to use Label Sets to classify your documents, you need to tell your Content Model that's what you want to do! This is done by setting the Content Model's Classification Method property to Labelset-Based.

  1. Select a Content Model in the Node Tree.
    • We've selected the "Labeling Behavior - Invoices" Content Model we've been working with in this How To section of the article.
  2. Select the Classification Method property.
  3. Using the dropdown menu, select Labelset-Based from the list of options.

Next, we will collect labels for each Document Type in the Content Model.

  1. Note we've already added a Labeling Behavior to the Behaviors property.
    • It doesn't matter whether you add a Labeling Behavior and/or collect labels before or after selecting Labelset-Based for the Classification Method.
    • However, you will need to add the Labeling Behavior at some point in order to collect label sets for the Document Types and ultimately use the Labelset-Based method for document classification. Visit the tutorial above if you're unsure how to add the Labeling Behavior to the Content Model.

Collect labels

See the above how to (Collect Label Sets) for a full explanation of how to collect labels for Document Types in a Content Model. The rest of this tutorial will presume you have general familiarity with collecting labels.

  1. Switch to the "Labels" tab.
  2. Collect labels for each Data Element in the Document Type's Data Model.
  3. Collect labels for each Document Type in the Content Model.

Test classification

In general, regardless of the Classification Method used, one of three things is going to happen to Batch Folders in a Batch during classification.

  1. The folder will be assigned the correct Document Type.
  2. The folder will be assigned the wrong Document Type.
  3. The folder will be assigned no Document Type at all.

The Labelset-Based method is no different. If all folders are classified correctly, that's great. However, testing is all about ensuring this is the case and figuring out where and why problems arise when folders are classified wrong or not classified at all.

We will look at a couple examples of how classification can go wrong using the Labelset-Based method, why that is the case, and what to do about it.

FYI

The example Batch in the rest of this tutorial is purposefully small to illustrate a few key points. In the real world, you will want to test using a much larger batch with several examples of each Document Type.

  1. In Grooper 2023, to test classification, you will need to create a Batch Process and add a "Classify" Batch Process Step.
  2. Make sure your Content Model Scope is set to the appropriate Content Model. For this example, we are using the "Labelset Classification - Invoices - Model" for classification.

  1. Next, go to the "Classification Tester" tab.
  2. Select all of the documents in your batch that you wish to classify.
  3. Right click on the documents, select "Classification" and then click on "Classify".

Now we just need to evaluate the success or failure of our classification. Let's look at a few documents in our Batch before detailing what we will do to resolve any classification errors.

  1. This is a complete success!
    • The Batch Folder has been assigned the "Factura" Document Type.
  2. It indeed should have been classified so. It is an invoice from the Factura Technology Corp.
  3. Its similarity score is 100% similar to the "Factura" Document Type.
    • This means a match has been found for all labels in the "Factura" Document Type's label set.

  1. This is a mitigated success.
    • The Batch Folder has been assigned the "Envoy" Document Type.
  2. It indeed should have been classified so. It is an invoice from Envoy Imaging Systems.
  3. However, it's a mitigated success in that its similarity score is only 84%.
    • That means only 84% of the labels located on this document match the label set for the "Envoy" Document Type.
  4. In this case, this is due to poor OCR data. While some labels may be present on the document, their OCR data is too garbled to match the label in the label set.
    • For example, the label Invoice was not matched because the text was OCR'd as "nvoice".
    • But a win is a win! Part of the reason Labelset-Based can be an effective classification method is you can miss a few labels due to poor OCR and still end up classifying the document appropriately. It is the set as a whole which determines similarity. As long as the document is more similar to the correct Document Type than any of the other Document Types, Grooper has made the right classification decision.

  1. This is a mitigated failure.
    • The Batch Folder should have been assigned the "Envoy" Document Type but it was unclassified.
  2. This is due to its similarity to the "Envoy" Document Type's Label Set falling below 60%.
    • 60% is the default Minimum Similarity for this Content Model. If a Batch Folder fails to achieve a similarity score above 60%, it will remain unclassified, as is the case here.
    • But that's so close! It just fell short in terms of similarity between matched labels and the "Envoy" Document Type's Label Set.
  3. In this case, several of the labels for the Data Elements of our Data Model are smudged on the document. OCR was unable to return these portions of the document. Therefore, the labels were not matched.
  4. Remember we collect one label per Data Element. However, there's all kinds of labels on this document for data we don't necessarily care about. Do we have a Data Field for the "Salesperson ID" field on this invoice? No, it's not data we're choosing to collect.
    • But just because we don't have a Data Field for it doesn't mean it's not a useful label for classification. We will look at how to create custom labels for classification purposes in the next section, Common Problems and Solutions.

  1. This is also a mitigated failure.
    • The Batch Folder should have been assigned a "Stuff and Things" Document Type but it was unclassified.
  2. This is a variation of an invoice from the vendor "Stuff and Things"
  3. You may notice the "Stuff and Things" Document Type does not appear anywhere in our similarity list.
  4. That's because there isn't a "Stuff and Things" Document Type yet. We need to add one and collect labels for it.
    • This is fairly common with a Labelset-Based approach to classification (and indeed the use of Label Sets in general). It often has its most utility in situations where you have a lot of variants of one particular kind of document. The general idea is to use Label Sets to distinguish between the variants by creating one Document Type for each variant, each with their own unique Label Set.
    • Such is the case with invoices. There's lots of different invoice formats, often unique to each vendor. When you get one in a Batch you haven't seen before, you will need to add a new Document Type to account for the new variant. However, as we will see in the next section, onboarding new Document Types with Label Sets is relatively quick and painless.

  1. This is a more severe version of the failure seen in the previous example.
    • The Batch Folder should have been assigned a "Standard" Document Type but it was assigned the wrong Document Type, the "Rechnung" Document Type.
  2. However, we don't have a "Standard" Document Type yet. Just like the previous example, we will need to add one and collect labels for it.
  3. The only thing we will need to watch out for is making sure that once we do add a Document Type for the invoices from Standard Products, it classifies more confidently than the "Rechnung" Document Type, beating out its similarity score and receiving the "Standard" Document Type.

  1. This is a complete failure.
    • The Batch Folder should have been assigned the "Envoy" Document Type but it was unclassified.
  2. The document is of poor enough quality that the OCR results are nearly unusable.
  3. This resulted in a paltry similarity score of 49%.

What can we do about this?

Sometimes you have to know when to stop. Will it be worth it to reconfigure your Content Model and Label Sets to force Grooper to classify this document in one way or another? Probably not. This is more likely than not an extreme outlier, not representative of the larger document set. It may be easier to kick this document (and other outliers) out to human review, especially if reconfiguring the Content Model is going to negatively impact results in other ways.

You have to know when to leave well enough alone. Outliers like this are a good example of when to do just that.


Common problems and solutions

Custom labels to boost similarity

  1. In the above tutorial, we saw this document failed to classify correctly.
    • The Batch Folder should have been assigned the "Envoy" Document Type but it was unclassified.
  2. This is due to its similarity to the "Envoy" Document Type's Label Set falling below 60%.
    • 60% is the default Minimum Similarity for this Content Model. If a Batch Folder fails to achieve a similarity score above 60%, it will remain unclassified, as is the case here.
    • But that's so close! It just fell short in terms of similarity between matched labels and the "Envoy" Document Type's Label Set.
  3. In this case, several of the labels for the Data Elements of our Data Model are smudged on the document. OCR was unable to return these portions of the document. Therefore, the labels were not matched.
  4. Remember we collect one label per Data Element. However, there's all kinds of labels on this document for data we don't necessarily care about. Do we have a Data Field for the "Salesperson ID" field on this invoice? No, it's not data we're choosing to collect.

Just because we don't have a Data Field for it doesn't mean it's not a useful label for classification. Even though we don't need to extract the salesperson's identification number, the fact that the label "Salesperson ID" is present on these invoices could be important. It's another feature that makes up the "Envoy" Document Type. We just need a way of telling Grooper to use this label for classification, even though we can ignore it when it comes time to extract data from these documents.

That is one of the reasons for adding custom labels to a Document Type's Label Set.

  1. To add a custom label, first navigate to the "Labels" tab of the Content Model.
  2. Either:
    1. Select a document folder of the desired Document Type in the Batch selector.
    2. Or right-click the document folder and assign it the correct Document Type.
      • In our case, we want to add a custom label to the "Envoy" Document Type. We have selected the document folder in the Batch and assigned it the "Envoy" Document Type.

  1. Make sure the document folder is selected.
  2. Select a Data Element from the Data Model to which you wish to add the custom label.
    • Most commonly, when adding a custom label for classification purposes, you'll just want to add it to the Data Model root itself, as we've selected here.
  3. Click the "Add Label" icon.
  4. Give your custom label a name and click "Add Custom".

  1. Adding the custom label will add a new label tab named whatever you named it.
    • In this case "Salesperson ID".
  2. Using the text editor, collect the label (either typing it in or lassoing or double clicking it in the document viewer).
    • In this case, Salesperson ID

Now that this label is in the Label Set, it will be considered a label during classification. The label's there. It's part of the document, whether we're extracting the value or not. We "tell" Grooper labels like these should be considered features for classification by creating custom labels.

FYI

You can add as many custom labels as you want.

Indeed, you may want multiple custom labels, adding more label features that distinguish one Document Type to another. To add multiple custom labels, just repeat the process described above, right-clicking the label tabs and adding a new custom label for each label you want to collect.

When we re-classify this Batch, we will see some different results.

  1. Navigate to the "Classify" Batch Process Step in your node tree and re-classify the documents.
  2. Click on the "Classification Tester" tab.
  3. Notice this document now classifies correctly as an "Envoy" Document Type!
  4. Before we added the custom label, this only achieved a similarity score of 59%, falling short of the 60% minimum similarity threshold. Now, it scores a 63% similarity.
    • With another label added to the Label Set, there's more context to what comprises this Document Type.
    • And that's with just one custom label added. There are tons more labels we could collect as custom labels on the document, likely further increasing the similarity score.

Adding new Document Types

The Labelset-Based classification method makes some assumptions about your document processing approach. It shines with structured and semi-structured forms. Labels, more or less, "stay put" on these kinds of documents. You'll see the same field labels over and over again even though the field values will change from document to document. This presumes your Document Types will be very regular (or rigid, with one Label Set very specifically corresponding to one Document Type). If you encounter a new form or variant of an existing form, you likely will need to account for it with a new Document Type.

  1. Such is the case for this document we encountered in the previous tutorial.
  2. The document is unclassified because it doesn't match any of the Label Sets for the existing Document Types.
    • More specifically, its similarity score to the existing Document Types does not meet the 60% minimum similarity threshold for this Content Model.
  3. This should be a "Stuff and Things" Document Type, but we don't have one yet. We need to add it and collect its Label Set to correctly classify the document.

Luckily, the process of adding new Document Types and defining their label sets is quick and painless and actually can become easier the more Document Types you add to the Content Model.

You can do the whole thing in the "Labels" tab of the Content Model.

  1. Navigate to the "Labels" tab in the Content Model.
  2. Select the unclassified document folder for which you want to create a new Document Type.
  3. Click the "+" (plus sign) button at the top.

  1. When the "Add Document Type" window pops up, name the Document Type whatever you like.
    • In our case we named it "Stuff and Things", for the invoice we want the Document Type to apply to.
  2. Click the "OK" button to finish and add the Document Type.

  1. This will add the Document Type to the Content Model
  2. It will also assign the Document Type to the selected document folder in the test Batch.
  3. Collect labels for the document as discussed in the Collect Labels section of this article.

That's it! You've added a new Document Type and collected its Label Set.

  • Keep in mind, as you add new Document Types to the Content Model you will want to perform regression testing to ensure your classification model is still accurate.

Document Types sharing similar labels

As you keep adding more and more Document Types to the Content Model, you will inevitably keep adding more and more labels for the Data Elements in your Data Model. Eventually, you will come across a new document variant that shares a lot of similarity with an already existing Document Type.

  1. Such was the case with these three documents. They were confidently classified as "Rechnung" Document Types.
  2. Their similarity is 99%.
  3. However, these aren't invoices from the vendor Rechnung, they are from the vendor Standard Products.
    • They simply share a lot of the same labels. Interestingly, this "problem" is actually going to end up making our job even easier when adding the new Document Type.

This is where the label auto-map functionality comes in handy.

  1. Add the new Document Type
  2. Assign the right document folder (whose labels you want to collect) the new Document Type.
  3. Click the "Auto-Map" button.
  4. In this case, Grooper found all of the labels present on another document except for the "Line Items" Data Table header label. This is the only label that differs between the Standard Products and Rechnung invoices.

  1. Click the "Rubberband Label" button at the top.
  2. Making sure your cursor is on the "Line Items" label, go ahead and draw a box around the header label for this table on the document.
  3. The header label should be collected for the Data Table object.

  1. If we go back and reclassify the documents now, we encounter an issue. The three Standard Products invoices aren't being classified at all!
  2. This is because this invoice is coming in at a 99% for both Rechnung and Standard Document Types.

You might be wondering: if we collected the header label for the Data Table on the Standard Products invoice, and that label was for sure different from Rechnung's, why are they coming up with the same similarity score?

  1. Go to the Data Table object in the node tree.
  2. Make sure you're on the "Data Table" tab.
  3. Expand out the Row Count Range property.
  4. Change the Minimum to 1.

With the Minimum Row Count Range set to "(none)" or zero, we were essentially telling Grooper that there may or may not be a table on the document. A table cannot have zero rows. Since we're telling Grooper the table is not a reliable feature of the Document Type, it will not use it for classification purposes.

If we set the Minimum to 1, we are telling Grooper to expect a table on every document. Now that the table is a reliable indicator of the Document Type, Grooper will use it (and its headers) in classification.

If you have multiple Data Table objects in your Data Model, you will need to repeat these steps for each one.

  1. If we go reclassify the documents again, we can see that the documents are now being classified appropriately as the "Standard" Document Type.
  2. The similarities are no longer the same and have a wide enough range that there is no confusion on how this document should be classified.

Volatile labels

Sometimes, you will collect a label you do not want to use for classification purposes. Most often, this is because the label may or may not be present depending on the document.

For example, some of these invoices from Standard Products have the sales tax totaled on the document. However, some do not.

This is called a "Volatile" label. Its presence on a document is unpredictable. Sometimes it's there. Sometimes it's not. It's an optional piece of information. However, because it's optional (or "volatile") we don't actually want to include this as a label for classification. It's going to decrease the similarity score for documents who do not contain the label.

  1. For example, the selected document here does not have the tax listed on the document.
  2. Since that label is not present, its similarity is lower than if it were present.
    • It drops from 100% to 98% in this case. Now, this may not be a critical drop in similarity for this case, but very well could be for others depending on their OCR quality or presence of multiple volatile labels.

You can indicate these kinds of labels are "volatile" and should not be considered for classification. Whether it's there or not, Grooper will not include it as a feature to measure the similarity between an unclassified document and the Document Type.

  1. To do this, navigate to the "Labels" tab of the Content Model.
  2. Click on the icon to the left of the collected label on the Data Element whose label you wish to turn volatile.
    • In our case, we wish to make the "Tax" Data Field's label volatile. As we've seen, sometimes it's present on the document and sometimes it's not.

  1. When the "Label Properties" window pops up, change the Volatile property from False to True.
  2. Click "SAVE" to save your changes.

  1. Now, when we classify this document folder...
  2. ...even though the sales tax label is not present on the document...
  3. ...its similarity is 100%!
    • With the label Tax set as a volatile label, it is no longer considered during the similarity calculation. With it missing from the document, it no longer negatively impacts the similarity score.
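
Conceptually, a volatile label is simply left out of the similarity calculation, as in the rough sketch below. The label set and character counts here are hypothetical; this only illustrates the idea described above.

```python
# Illustration only: a volatile label is skipped entirely when scoring
# similarity, so a missing "Tax" label no longer penalizes the document.
def similarity(matched_labels: set[str], label_set: dict[str, bool]) -> float:
    """label_set maps each collected label to a 'volatile' flag."""
    scored = {lbl: len(lbl) for lbl, volatile in label_set.items() if not volatile}
    matched = sum(chars for lbl, chars in scored.items() if lbl in matched_labels)
    return matched / sum(scored.values())

# Hypothetical "Standard" label set with "Tax" flagged volatile.
standard = {"Invoice Number": False, "Invoice Date": False, "Amount Due": False, "Tax": True}
found_on_document = {"Invoice Number", "Invoice Date", "Amount Due"}  # no "Tax" present

print(similarity(found_on_document, standard))  # 1.0 -- the missing Tax label is ignored
```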

How to use label sets for field based extraction

Label sets and the Labeled Value Extractor Type

Intro to the Labeled Value extractor

For most static field based extraction, the Labeling Behavior leverages the Labeled Value Extractor Type. Let's first briefly examine how Labeled Value works outside of the Labeling Behavior functionality.

As the name implies, the Labeled Value extractor is designed to return labeled values. A common feature of structured forms is to divide information across a series of fields. But it's not as if you just have a bunch of data randomly strewn throughout the document. Typically, a field's value will be identified by some kind of label. These labels provide critical context for what the data refers to.

Labeled Value relies on the spatial relationship between the label and the value. Most often labels and their corresponding values are aligned in one of two ways.

1. The value will be to the right of the label.

2. The value will be below the label.

Labeled Value uses two extractors itself, one to find the label and another for the value. If the two extractors' results are aligned horizontally or vertically within a certain amount of space (according to how the Labeled Value extractor is configured), the value's result is returned. (A sketch after the following configuration example illustrates this logic.)

  1. For example, we could configure this "Invoice Number" Data Field to utilize the Labeled Value extractor to return the invoice number on the document.
    • Keep in mind this is the "hard" way of doing things. As we will see, the Labeling Behavior will make this process easier.
  2. We've set the Value Extractor to Labeled Value
  3. The label is returned by the Label Extractor
    • Here, set to a Pattern Match extractor using the regex pattern Invoice Number
  4. The value is returned by the Value Extractor
    • Here, set to a Pattern Match extractor using the regex pattern [A-Z]{2}[0-9]{6}
  5. The Maximum Distance property is used to determine alignment relationship between the label and the value as well as the maximum distance between the label and value.
    • The default settings are used here, indicating the value can be aligned horizontally, up to 2 inches from the right of the label, or it can be aligned vertically, up to 2 inches below the label.
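
The sketch below illustrates the layout logic described above: find the label, then accept a value aligned to its right or below it within the maximum distances. It reuses the regex and 2-inch defaults from this example, but the word coordinates are hypothetical and this is not Grooper's extractor code.

```python
# Conceptual sketch of the Labeled Value idea: find the label, then accept a
# value that sits within the configured maximum distance to the right of the
# label, or below it.
import re
from dataclasses import dataclass

@dataclass
class Hit:
    text: str
    left: float   # inches
    top: float
    right: float
    bottom: float

def labeled_value(label_hits, value_hits, max_right=2.0, max_below=2.0):
    for label in label_hits:
        # Horizontal alignment first: value starts to the right of the label.
        for value in value_hits:
            horizontally_aligned = abs(value.top - label.top) < 0.1
            if horizontally_aligned and 0 <= value.left - label.right <= max_right:
                return value.text
        # Then vertical alignment: value sits below the label.
        for value in value_hits:
            vertically_aligned = abs(value.left - label.left) < 0.1
            if vertically_aligned and 0 <= value.top - label.bottom <= max_below:
                return value.text
    return None

# Hypothetical word positions from a recognized invoice page.
labels = [Hit("Invoice Number", 0.5, 1.0, 1.7, 1.2)]
values = [Hit(m.group(), 2.0, 1.0, 2.9, 1.2)
          for m in re.finditer(r"[A-Z]{2}[0-9]{6}", "Invoice Number IN165798")]
print(labeled_value(labels, values))  # IN165798
```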

  1. Upon execution, the Label Extractor first finds the label, then looks to see if anything matching the Value Extractor is located according to its layout configuration.
    • Sure enough, there is a result, "IN165798".
  2. The Value Extractor's result is collected for the Data Field upon running the Extract activity.

However, the Labeled Value extractor's set up is a little different when combining it with the Labeling Behavior. The end result is a simpler configuration, utilizing collected labels for the Label Extractor.

Label sets and Labeled Value

Since this Content Model utilizes the Labeling Behavior, at least part of the setup described in the previous tab was unnecessary. If you've collected a label for the Data Field and that Data Field's Value Extractor is set to Labeled Value, there is no need to configure a Label Extractor. Instead, Grooper will pass through the collected label to the Labeled Value extractor.

  1. For example, we've already collected a label for the "Invoice Number" Data Field for the "Factura" Document Type.
  2. The label Invoice Number is returned on the document for the label identifying the document's invoice number.

  1. With the label collected, the set up for this "Invoice Number" Data Field will be much simpler.
  2. Notice the Value Extractor has been set to Labeled Value.
  3. The Label Extractor and Value Extractor are unconfigured (or "blank").

  1. However, upon testing extraction, the invoice number is collected.
    • All that was required, in this case was to collect the label and set the Data Field's Value Extractor property to Labeled Value. Magic!
    • Not magic. Label sets.
  2. With Labeling Behavior enabled and a label collected for the "Invoice Number" Data Field, the Labeled Value extractor's Label Extractor looks for a match for the collected label.
    • In this case Invoice Number.
  3. Furthermore, with Labeling Behavior enabled and a collected label utilized as the Label Extractor, the Labeled Value extractor's Value Extractor will still return a value even if left unconfigured.
    • It will look for the nearest simple segment according to the layout settings (the Maximum Distance and Maximum Noise property).
    • The result "IN165796" is indeed the nearest simple segment and the desired result. So, there is technically nothing else we need to do. However, situations are rarely this simple and straightforward. There are some other considerations we should keep in mind.


Best practice considerations

While you can get a result without configuring the Labeled Value extractor's Value Extractor, that doesn't mean you should. It is considered best practice to always configure the Value Extractor.

So, why is it considered best practice to do so? The short answer: to increase the accuracy of your data extraction. A simple segment could be anything. If you know the data you're trying to extract has a certain pattern to it, you should target that data according to its pattern. Dates, for example, follow a few different patterns. Maybe it's "07/20/1969" or "07-20-69" or "July 20, 1969", but you know it's a date because it has a specific syntax or pattern to it. To increase the accuracy of your extraction, you should configure the Value Extractor with an extractor that returns the kind of data you're attempting to return.
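
For illustration, the regex below covers the date formats mentioned above. It is only an example of the kind of pattern you might give the Value Extractor; it is not the exact pattern used by the Data Type referenced later in this tutorial.

```python
# Illustrative only: a regex covering the date formats mentioned above, the
# kind of pattern you might target instead of accepting any nearby segment.
import re

date_pattern = re.compile(
    r"\b(?:\d{1,2}[/-]\d{1,2}[/-]\d{2,4}"   # 07/20/1969 or 07-20-69
    r"|[A-Z][a-z]+ \d{1,2}, \d{4})\b"        # July 20, 1969 or Feb 26, 2014
)

for sample in ["07/20/1969", "07-20-69", "July 20, 1969", "Page"]:
    print(sample, bool(date_pattern.search(sample)))
# The first three match; "Page" does not, so a generic segment like it
# would no longer be returned for the "Invoice Date" field.
```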

We can see fairly quickly why leaving the Labeled Value extractor's Value Extractor unconfigured is not ideal.

  1. All the Data Fields in this Data Section have collected labels and are using the Labeled Value extractor.
    • Except the "Vendor Name" Data Field. Ignore this Data Field for the time being.
  2. We only get a few accurate results.
    • Without its Value Extractor configured, the Labeled Value extractor is going to grab whatever segment it can get. While it can be what you want, it is not necessarily what you want.
      • The Value Extractor will allow you to target more specifically what you want to return.
    • Furthermore, while the "Sales Tax" and "Invoice Amount" results may look accurate, they too are not. There are some OCR errors. The extracted segments "0,00" and "54.594.00" should be returned as "0.00" and "54,594.00".
      • The Value Extractor will also allow you to utilize Fuzzy RegEx, Lexicon lookups, output formatting, Data Type Collation methods and other extractor functionalities to manipulate, format, and filter results.
  3. For example, the "Date" Data Field returns the segment "Page" to the right of the label Date where it should be returning the date below it, "Feb 26, 2014".
    • If we were instead to configure the Labeled Value extractor's Value Extractor to only return dates, we'd get the more specific result we want and not the generic segment we don't.
    • FYI: When the Value Extractor property is left unconfigured in this manner, the Labeled Value extractor follows a "horizontal then vertical" order of operations. If both a Right Maximum Distance and a Bottom Maximum Distance are configured, it will look for results to the right of the label (aligned horizontally) before looking for results below the label (aligned vertically). A conceptual sketch of this search order follows below.
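
The geometry Grooper actually uses is internal to the Labeled Value extractor, but the "nearest segment, horizontal then vertical" idea can be sketched roughly as follows. The Segment class, the coordinate units, and the pick_value thresholds are illustrative assumptions, not Grooper properties or API.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    left: float   # page coordinates in arbitrary units, for illustration only
    top: float

def pick_value(label, segments, max_right=2.0, max_below=1.0):
    """Conceptual sketch: prefer the nearest segment to the RIGHT of the label
    (horizontal pass), and only then look BELOW it (vertical pass)."""
    same_line = [s for s in segments
                 if abs(s.top - label.top) < 0.1 and 0 < s.left - label.left <= max_right]
    if same_line:
        return min(same_line, key=lambda s: s.left - label.left)
    below = [s for s in segments
             if abs(s.left - label.left) < 0.5 and 0 < s.top - label.top <= max_below]
    if below:
        return min(below, key=lambda s: s.top - label.top)
    return None

label = Segment("Invoice Number", left=1.0, top=2.0)
candidates = [Segment("IN165796", left=2.2, top=2.0), Segment("Page", left=5.0, top=2.0)]
print(pick_value(label, candidates).text)   # IN165796 -- the nearest segment on the same line
```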

  1. If we reconfigure this "Invoice Date" Data Field slightly we will get a much more accurate result.
  2. We've kept the Data Field's Value Extractor set to Labeled Value.
  3. The only thing we've changed is we've set the Labeled Value extractor's Value Extractor to a Reference extractor pointing to a Data Type returning dates.

  1. Upon testing extraction, we can see the Data Field now collects the value we want, the invoice's date "02/26/2014".
  2. By configuring the Labeled Value extractor's Value Extractor, it's no longer looking for just simple segments next to the label. So, the word "Page" is no longer returned. Instead, it's looking for results matching the Value Extractor's results.
    • This increases the specificity of what the Labeled Value returns. Increased specificity yields increased accuracy.

Configuring the Labeled Value extractor's Value Extractor also gives you the myriad of functionalities available to extractors. For example, Fuzzy RegEx is one of the main ways Grooper gets around poor OCR data at the time of extraction. When the text data is just a couple characters off of the extractor's regex pattern, Fuzzy RegEx can not only match the imperfect data but "swap" the wrong characters for the right ones, effectively cleansing your result.
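
Grooper's Fuzzy RegEx implementation is internal to the product, but the general "match imperfect text, then swap in the clean form" idea can be sketched with a small similarity check. The fuzzy_currency function, the shape comparison, and the 0.85 threshold below are illustrative assumptions, not Grooper's algorithm.

```python
import re
import difflib

def fuzzy_currency(ocr_text: str, threshold: float = 0.85):
    """Sketch: accept text that is *close* to a currency shape, then clean it up."""
    shape = re.sub(r"\d", "#", ocr_text)            # "54.594.00" -> "##.###.##"
    target_shape = "##,###.##"                      # the shape we expect
    score = difflib.SequenceMatcher(None, shape, target_shape).ratio()
    if score >= threshold:
        digits = re.sub(r"\D", "", ocr_text)        # keep only the digits: "5459400"
        return f"{int(digits) / 100:,.2f}", score   # re-insert valid separators
    return None, score

print(fuzzy_currency("54.594.00"))   # ('54,594.00', ~0.89) -- the mis-OCR'd period is "swapped"
```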

  1. Take the "Invoice Amount" Data Field for example.
  2. Here, the Data Field's Value Extractor is set to Labeled Value.
  3. And, the Labeled Value extractor's Value Extractor is left unconfigured.

  1. The Labeled Value extractor first locates the collected label Amount Due and without a configured Value Extractor returns the nearest text segment (according to the Maximum Distance settings).
  2. This is almost the result we want.
    • It's the "right" result in that, yes, that is the text segment that corresponds to the invoice amount due for this invoice.
    • But it's very much the wrong result in that the OCR text data is inaccurate. "54.594.00" is not a valid currency value. It should be "54,594.00" with the first period being a comma.

However, that's just a single character off from being the right result. We could build an extractor to return currency values looking to make fuzzy swaps like this, both matching text that is slightly off and reformatting the result to match a valid currency format. If we used that extractor as the Labeled Value extractor's Value Extractor it would not only find the segment but also reformat the result, swapping the mis-OCR'd period for what it should be, a comma.

And we've done just that.

  1. Here, we've set the Labeled Value extractor's Value Extractor to reference a Data Type returning fuzzy matched currency values.

  1. The Value Extractor matches the text we want, below the label Amount Due
  2. And since the referenced extractor uses Fuzzy RegEx the returned result is now a valid currency value.
    • The result is now "54,594.00" instead of "54.594.00". The first period was swapped for a comma.

Using Static labels for Data Field extraction

Collecting Static labels

The Data Field Data Element has a unique label option, the Static label. This label option is useful for situations where the label itself is what you want to extract.

  1. For example, we have a Data Field in this Content Model's Data Model to collect the vendor's name for the invoice.
    • However, there isn't necessarily a label for the vendor's name like there is for other data points on the document.
  2. The purchase order number has a label, "PO Number", pointing to that data, "PO0009845".
  3. However, there is no such label for this invoice document's vendor name, "Factura Technology Corp".
    • But that's the data we want. The name itself. If these invoices are always classified as "Factura" Document Types, they're always going to have this text, "Factura Technology Corp". That's the vendor's name, and that's the data we want.

What we really want to do is collect a piece of information that is the same for every single document of one Document Type. We expect the vendor's name "Factura Technology Corp" to be present for every document assigned the "Factura" Document Type during classification. Furthermore, we always expect it to be "Factura Technology Corp" and not something else.

Therefore, the vendor's name is "static" for the Document Type. It's present on every document of that Document Type, and it's the same value on each one. You know what else is static on structured and semi-structured forms? Labels! In this case, the label "Factura Technology Corp" is itself the value we want to return.

This is what a Static label is for.

  1. With your cursor set in the desired Data Field, click the "Add Label" button at the top of the tab.
    • FYI: Only the Data Field Data Element has the option for a Static label.
  2. When the option comes up, click "Add Static".

  1. Collect the label text for the new Static label.
  2. The desired result will be returned without needing a separate "label" and "value"; the Static label is itself the value.

Returning the Static label

Now that the Static label is collected, how does Grooper know to return it during extraction when the Extract activity runs? The short answer is the Labeled Value extractor type will do this for us.

With "Factura Technology Corp" collected as a Static label, and the "Vendor Name" Data Field configured to utilize the Labeled Value extractor, it will return the Static label itself as the result.

  1. Here, we have the "Vendor Name" Data Field selected in the Node Tree.
  2. The Data Field's Value Extractor property is set to use the Labeled Value extractor type.
  3. The Labeled Value extractor's Label Extractor and Value Extractor are both unconfigured.

  1. Go to the "Tester" tab.
  2. Click the play button to test the extraction.
  3. With this Labeled Value configuration, and a Static label collected for this Data Field, the Static label is itself what the extractor is looking for on the document.
  4. If present, it will be returned and collected at time of extraction when the Extract activity runs.

How to use label sets for tabular extraction

Label sets and the Tabular Layout method

Label sets and Tabular Layout

Many tables label the columns so the reader knows what the data in that column corresponds to. How do you know the unit price for an item on an invoice? Typically, that item is in a table and one of the columns of that table is labeled "Unit Price" or something similar. Once you read the labels for each column (also called "column headers"), you the reader know where the table begins (below the column headers) and can identify the data in each row (by understanding what the column headers refer to).

This is also the basic idea behind the Tabular Layout Extraction Method. It too utilizes column header labels to "read" tables on documents, or at least as step one in modeling the table's structure so that Grooper can extract data from each cell in the table.

Furthermore, with the Tabular Layout method, label sets collected using a Labeling Behavior can be used to extract data from tables on documents. In this case, the labels collected for the Data Column children of a Data Table are utilized to help model the table's structure.

Once the column header locations are established, the next requirement is a way to understand how many rows are in the table. This is done by configuring at least one Data Column's Value Extractor property. Generally, there is at least one column in a table that is always present for every row in the table. If you can use an extractor to locate that data below its corresponding column header, that gives you a way of finding each row in the table.

Last, there are a few other considerations to keep in mind. Is every row in the table a single line or are the rows "multiline"? Do you need to clean up the data the Tabular Layout initially extracts for a column by normalizing it with an extractor? Do you need to establish a table "footer" to limit the number of rows extracted?

This tutorial will cover the basic configuration of the Tabular Layout Extraction Method using collected Label Sets and address a few of these considerations.

The basic steps will be as follows:

  1. Go to your Data Table in the node tree.
  2. Set the Extract Method property to Tabular Layout.

  1. Collect labels.
    • At minimum you must collect a header label for each Data Column child in the Data Table. We will also discuss the benefits of collecting a label for the full header row.
  2. Assign a Value Extractor for at least one Data Column.
    • We always expect to find a quantity for each line item in the invoice. There's always a "Quantity" column. This data is also present on every row. This will provide the information necessary to find each row in the table.
    • We will also discuss why you might configure the Value Extractor property on additional Data Columns as well.
  3. Test to ensure the table's data is collected.

In a perfect world, you're done at that point. As you can see in this example, we've populated a table. Data is collected for all four Data Columns for each row on the document.

However, the world is rarely perfect. We will discuss some further configuration considerations to help you get the most out of this table extraction method in the "Additional Considerations" section below.

Collect Labels

See the how-to above (Collect Label Sets) for a full explanation of how to collect labels for Document Types in a Content Model. The following tutorial will presume you have general familiarity with collecting labels.

As far as strict requirements for collecting labels for tabular data extraction go, you must at minimum collect a label for each Data Column you wish to extract.

For this "Stuff and Things" Document Type, one column header label has been collected for each of the four Data Column children of the "Line Items" Data Table.

  1. The label Quantity for the "Quantity" Data Column
  2. The label Description for the "Description" Data Column
  3. The label Unit Price for the "Unit Price" Data Column
  4. The label Total for the "Line Total" Data Column

You may optionally collect a label for the entire row of column header labels. This label is collected for the parent Data Table object's label.

  1. The label Quantity Item Serial Number Description Unit Price Total for the "Line Items" Data Table

It is generally considered best practice to capture a header row label for the Data Table. But if it's optional, why do it? What is the benefit of this label?

The answer has to do with imperfect OCR text data and Fuzzy RegEx. Fuzzy RegEx provides a way for regular expression patterns to match in Grooper when the text data doesn't strictly match the pattern. The regex pattern Grooper and the character string "Gro0per" are off by just a single character. An OCR engine misreading an "o" character as a zero is not uncommon by any means, but a standard regex pattern of Grooper will not match the string "Gro0per". The pattern expects there to be an "o" where there is a zero.

Using Fuzzy RegEx instead of regular regex, Grooper will evaluate the difference between the regex pattern and the string. If it's similar enough (if it falls within a percentage similarity threshold), Grooper will return it as a match.

  • FYI "similarity" may also be referred to as "confidence" when evaluating (or scoring) fuzzy match results. Grooper is more or less confident the result matches the regex pattern based on the fuzzy regex similarity between the pattern and the imperfect text data. A similarity of 90% and a confidence score of 90% are functionally the same thing (One could argue there is a difference between these two terms when Fuzzy Match Weightings come into play, but that's a whole different topic. And you may encounter Grooper users who use the terms "similarity" and "confidence" interchangeably regardless. Visit the Fuzzy RegEx article if you would like to learn more).

So how does this apply to the Data Table's column header row label? The short answer is it provides a way to increase the accuracy of Data Column column header labels by "boosting" the similarity of the label to imperfect OCR results.

  1. For example, examine the collected label for the "Description" Data Column.
    • Notice the label Description is highlighted red. The label doesn't match the text on the document.
  2. This is due to imperfect OCR results.
    • The label should read "Description" but OCR made some missteps and recognized that segment as "DescripUon".
    • The "ti" in "Description" were recognized as a capital "U". This means "Description" is two characters different from "Description" or roughly 82% similar. The Labeling Behavior's similarity threshold is set to 90% for this Content Model. 81% is less than 90%. So, the result is thrown out.
      • FYI, this threshold is configured when the Labeling Behavior is added using the Behaviors property of a Content Model. The Label Similarity property is set to 90% by default, but can be adjusted at any time by configuring the Labeling Behavior item in the Behaviors list.

As we will see, capturing the full row of column header labels will boost the similarity, allowing the label to match without altering the Labeling Behavior's fuzzy match settings.

First, notice what's happened when we lassoed the row of column header labels.

  1. Some of the labels are off. "oty." should read "Qty." and "DescripUon" should read "Description".
  2. It's because that's what's in the document's text. When you lasso a label, it's going to grab whatever OCR text data was generated from the Recognize activity (or native text for digital documents).
  3. And, our "Description" Data Field's label still isn't matching.
    • But keep your eye on the birdie.

  1. Notice what happens when we spell-correct the lassoed label, typing "Qty." instead of "oty." and "Description" instead of "DescripUon".
  2. Now the label matches. MAGIC!

Not magic. Just math.

The Data Table's column header row label is much, much longer than a single Data Column's column header label. There are simply more characters in "Qty. Qty. Item Number Description Unit Price Extended Price\r\nOrd. Shp." than in "Description" (70 vs 11). Where the "Description" Data Column's label is roughly 82% similar to the text data (9 out of 11 characters), the "Line Item" Data Table's label, comprised of the whole row of column labels, is roughly 96% similar to the text data (67 out of 70 characters).
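
You can check that arithmetic yourself. The two-line helper below is just a worked example of the percentages quoted above; the 90% figure mirrors the default Label Similarity threshold mentioned earlier.

```python
def char_similarity(matching_chars: int, total_chars: int) -> float:
    return matching_chars / total_chars

# "Description" read as "DescripUon": 9 of 11 characters match.
print(round(char_similarity(9, 11), 2))    # 0.82 -- below a 90% threshold, rejected
# The full header row label: 67 of 70 characters match.
print(round(char_similarity(67, 70), 2))   # 0.96 -- above 90%, accepted
```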

Utilizing a Data Table label allows you to hijack the whole row's similarity score when a single Data Column does not meet the similarity threshold. If the label can be matched as a part of the larger whole, its confidence score goes up much further than by itself. The Data Table's larger label of the full row of column labels gives extra context to the "Description" Data Column's label, providing more information about what is and is not an appropriate match.

So why is it considered best practice to capture a label for the Data Table? OCR errors are unpredictable. The set of examples you worked with when architecting this solution may have been fairly clean with good OCR reads. That may not always be the case. Capturing a Data Table label for the column label row will act as a safety net to avoid unforeseen problems in the future.

Assign a Data Column's Value Extractor

This step is all about row detection.

So far, all we've done is establish column header positions on each document. So, Grooper knows where the table "starts". But that's not where the data is. The table's data is in the rows.

As it stands, Grooper doesn't know anything about the rows in the tables. It doesn't know the size of each row. It doesn't know what kind of data is supposed to be in the rows. Maybe most importantly, it doesn't know how many rows there are. Tables tend to be dynamic. They may have 3 rows on one document and 300 on the next. Grooper needs a way of detecting this.

Indeed, if we were to test extraction with just labels collected, we would not get any result whatsoever.

  1. Go to the "Tester" tab to test your results.
  2. Click the play button to test extraction.
  3. As you can see, we get no extraction results for the "Line Item" Data Table.

This is why we need a Data Column's Value Extractor property configured, to give the Extract activity an awareness of the rows beneath the column labels.

The key thing to keep in mind is this data must be present on every row. You'll want to pick a column whose data is always present for every row, where it would be considered invalid if the information wasn't in that cell for a given row.

In our case, we will choose the "Quantity" Data Column. We always expect there to be a quantity listed for the line item on the invoice, even if that quantity is just "1".

  1. We will select the "Quantity" Data Column in the Node Tree.
  2. We will configure the Value Extractor to return the numerical quantity listed for every line item on every row of the table.
    • We will keep this fairly simple for demonstration purposes, using a Pattern Match extractor.

This is the pattern we will use for the "Quantity" Data Column's Value Extractor.

  1. The regex is a fairly simple pattern to match generic quantities.
    • It'll match one to three digits, followed by an optional decimal and zero to four more digits, and that must be surrounded by a space character before and after (see the sketch below).
  2. As you can see, we get two results below the "Quantity" label. We should then get two rows when this table extracts.
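
The exact pattern isn't reproduced in the text above, but a reconstruction matching that description might look like the following. The regex, the sample line, and the results are assumptions for illustration only.

```python
import re

# Hedged reconstruction of the quantity pattern described above:
# one to three digits, an optional decimal with up to four more digits,
# and a space character on either side.
QTY = re.compile(r" \d{1,3}(?:\.\d{0,4})? ")

line = " 2 Widget, large 10.00 20.00 "
print([m.group().strip() for m in QTY.finditer(line)])
# ['2', '10.00'] -- a generic pattern like this also hits other numbers,
# which is why the result's position relative to the column label matters.
```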

We get a bunch of other hits as well. This is a very generic extractor matching very generic numerical data.

  1. Will this result present a problem? Will we get an extra row for its result?
    • No. That result is above the label collected for the Data Column. The Tabular Layout method presumes rows are below column labels. Any result above them will be ignored.
  2. What about results like these? Will this present a problem?
    • The short answer is no. This result is misaligned with the "Quantity" Data Column's header. It's too far to the right to be considered "under" it and will be ignored as a candidate to produce a row.
    • That said, when you are building your own Data Column extractors, do pay more attention to results below the column header row. They have the most potential to produce false positive results, producing erroneous rows.

For fairly simple table structures we now have the two things the Tabular Layout method needs to extract data.

So far, we have:

  1. Collected labels for the Data Columns (and optionally the whole row of column labels for the Data Table)
  2. Configured at least one Data Column's Value Extractor.

Now, all we need to do is tell the Data Table object we want to use the Tabular Layout method. We do this by setting its Extract Method property to Tabular Layout.

Set Extract Method to Tabular Layout and test

A Data Table's extraction method is set using the Extract Method property. To enable the Tabular Layout method, do the following.

  1. Select a Data Table object in your Data Model.
    • Here, we've selected the "Line Items" Data Table.
  2. Select the Extract Method property.
  3. Using the dropdown menu, select Tabular Layout

Now, let's test out what we have and see what we get!

  1. Click on the "Tester" tab.
  2. Select the Document in the Batch.
  3. Click the play button to test the extraction.
    • Side note: We've seen before we can test extraction using the "Labels" tab of a Content Model or Document Type when Labeling Behavior is enabled. The only real difference is we're testing extraction for the specific Data Element selected in the Node Tree. In this case, the "Line Items" Data Table. The "Test" button in the "Labels" tab will test extraction for the entire Data Model and all its component child Data Elements. However, feel free to test extraction at either location. The end result is the same. We're testing to verify extraction results.
  4. We see that we are getting the information extracted from the document that we want.
  5. Here we can see this part of the table is being extracted.
  6. However, we are also collecting some junk data that we don't want.
  7. We can see that this is being extracted from information outside of the table. Why is that?

For the Tabular Layout method, the Data Table is populated using primarily two pieces of information (a conceptual sketch follows this list).

  1. The location and width of the Data Column header labels.
    • This determines the width of the cells for each column.
    • Side note: The width of the column cells is actually determined differently depending on if the table has lines. If the table has lines (as it does in this example) and those lines were previously detected via a Line Detection (or Line Removal) IP Command, the cell width will be expanded to the boundaries of the lines. Table lines give human readers an indicator of where the data "lives" (or is contained). If it's in the box, it belongs to the column. If it's out of the box, it belongs to a different column.
  2. The number of rows as determined by the Data Columns whose Value Extractor property is configured.
    • One row is established for each result the Value Extractor returns.
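
Conceptually, those two inputs combine something like the sketch below. The Word class, the coordinate values, the row_tops list, and the cell_text logic are all illustrative assumptions; this is not how Grooper is implemented internally.

```python
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    left: float
    top: float

# 1. Column boundaries derived from the header label positions (illustrative numbers).
columns = {"Quantity": (0.5, 1.5), "Description": (1.5, 5.0),
           "Unit Price": (5.0, 6.5), "Line Total": (6.5, 8.0)}

# 2. Row positions derived from the configured Value Extractor's results
#    (one row per quantity found below the header line).
row_tops = [3.0, 3.5]

def cell_text(words, col, row_top, row_height=0.5):
    lo, hi = columns[col]
    return " ".join(w.text for w in words
                    if lo <= w.left < hi and row_top <= w.top < row_top + row_height)

words = [Word("2", 0.6, 3.0), Word("Widget", 1.6, 3.0),
         Word("10.00", 5.1, 3.0), Word("20.00", 6.6, 3.0)]
print({col: cell_text(words, col, row_tops[0]) for col in columns})
# {'Quantity': '2', 'Description': 'Widget', 'Unit Price': '10.00', 'Line Total': '20.00'}
```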

  1. Here Grooper has found an item on the document that matches the Pattern Match extractor we set for the Quantity column, within the given width of the column.
  2. Since Grooper found a match, it assumes that this is a valid row.

  1. Since we do not have an extractor on the Description column, Grooper simply looks for anything within the column width that is on the detected row, and returns it.

So, how do we fix this? We must give Grooper some parameters for where it is allowed to look for rows to narrow down our results. This can be done by adjusting the Maximum Header Distance.

Tip: Adjusting the Maximum Header Distance

Let's adjust the Data Table's Maximum Header Distance to try and improve our results.

  1. Click on the "Data Table" tab.
  2. Expand the sub-properties for the Extract Method property.
  3. Expand the sub-properties for the Row Detection property.
  4. By default, the Maximum Header Distance property is set to 0%.
  5. If we look at the in-app Grooper Help at the bottom, we see that when this property is set to 0%, Grooper will look through the whole document to find rows, even if those rows occur above the table's Header Label.

  1. Let's give this property a value.
    • Generally, 200-300% is a good value to start with.

  1. Click back to the "Tester" tab.
  2. Click the play button to test.
  3. We are now only getting the results we want returned.

So what is Grooper doing? How is it now only finding the information that we want from the document?

Since the Maximum Header Distance now has a value above 0%, Grooper will only look below the header for rows instead of searching the whole page. Since it found rows within 300% of the header's height below the header line, they are returned as rows.
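
In other words, the property defines a vertical search window below the header row. The coordinates and the interpretation below are illustrative assumptions used to make the idea concrete, not Grooper internals.

```python
# Maximum Header Distance is expressed as a percentage of the header row's height.
header_top = 3.00          # illustrative page coordinates (inches)
header_height = 0.25
max_header_distance = 3.0  # 300%

search_top = header_top + header_height
search_bottom = search_top + header_height * max_header_distance
print(f"Row candidates are only considered between y={search_top} and y={search_bottom}")
# With the default of 0%, no window is applied and the whole page is searched,
# including text above the header labels.
```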

The basic setup is great for simple table structures. It is much easier to set up than previous table extraction methods, such as Header-Value. However, the biggest challenge in table extraction is the variety of ways in which table data can be formatted and organized into complicated row and column structures.

The Tabular Layout method offers a smorgasbord of configurability for more complicated or nuanced table structures. For more information on Tabular Layout's advanced setup, please visit the full #Tabular Layout article.

Label sets and the Row Match method

The Row Match table extraction method uses an extractor to pattern match each row of a table. For each result the extractor returns, the Data Table will collect one row. So, if the extractor returns forty results, you'll end up with a table with forty rows. Data Column results are then populated by filtering the data within each row to the proper column. Commonly, an array based collation method will be used to return the full row, then elements of that array will form the column results.

Label Sets can also be used in conjunction with the Row Match method. The Data Table's Header and/or Footer Labels can be leveraged to narrow where the Row Extractor executes on the document.

For example, take this Closing Disclosure form. This table structure could be targeted using the Tabular Layout method. However, it is more easily targeted and returned using the Row Match Method. The only potential problem is differentiating between the different sections of tables, such as "A. Origination Charges" and "B. Services Borrower Did Not Shop For" and "C. Services Borrower Did Shop For".

The row structure of each of these highlighted tables is similar enough (or identical in the case of the "B" and "C" tables) that a single extractor could easily produce false positive matches.

However, the labels on this document clearly define where each table begins and ends.

How do you the reader know where the "B. Services Borrower Did Not Shop For" table starts? You read the label "B. Services Borrower Did Not Shop For". This is its header label.

How do you the reader know where the "B. Services Borrower Did Not Shop For" table ends? Once you find the label "C. Services Borrower Did Shop For", you know you're looking at a different table. This is the "B" table's footer label.

The Row Match method will utilize a Data Table's collected Header and Footer labels to define the table's boundaries. The Row Extractor will only return row instances following the Header and/or before the Footer.
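
Conceptually, the labels simply bound the region of text the Row Extractor is allowed to match in, roughly like the sketch below. The fee lines, amounts, and row regex are invented for illustration; only the section titles come from the Closing Disclosure example discussed here.

```python
import re

page_text = """B. Services Borrower Did Not Shop For
01 Appraisal Fee to Acme Appraisers $405.00
02 Credit Report Fee to Data Bureau $29.80
C. Services Borrower Did Shop For
01 Pest Inspection Fee to Pests Co. $120.50
"""

header = "B. Services Borrower Did Not Shop For"
footer = "C. Services Borrower Did Shop For"
row_pattern = re.compile(r"^\d{2} .+\$[\d,]+\.\d{2}$", re.MULTILINE)

start = page_text.index(header) + len(header)   # the "B" table starts after its header label
stop = page_text.index(footer)                  # and stops at the "C" table's header (its footer)
rows = row_pattern.findall(page_text, start, stop)
print(rows)
# ['01 Appraisal Fee to Acme Appraisers $405.00', '02 Credit Report Fee to Data Bureau $29.80']
```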

Example row extractor

  1. For this example, we will use the Content Model named "Row Match and Label Sets - Model"
    • This Content Model already has its Labeling Behavior enabled.
  2. We have selected, in the Batch Viewer, the Batch named "Row Match and Label Sets - Test Batch"
  3. We will use the Row Match Extract Method to populate this Data Table named "B. Services"
  4. Last, we will use the selected Data Type named "TBL-RM - B. Services Table" as our Row Extractor.

  1. This Data Type extractor uses Ordered Array Collation to return each row instance.
  2. The child extractors, returning each element of the array, will be mapped to their corresponding Data Column when the Row Match method executes.
  3. However, the problem is we're getting too many rows. These are the rows we want to collect.
  4. We do not want to collect these rows. They pertain to a different table's information.

  1. Selecting our Data Table named "B. Services" we can test our extractor's results.
  2. We have assigned the Data Type ("TBL-RM - B. Services") as our Row Extractor.
  3. Everything highlighted here is extraneous, false-positive data.

  1. We could narrow down our results by configuring a Header Extractor and/or Footer Extractor in the Row Match sub-properties.
    • However, in this case, we can do things more simply using Label Sets. Rather than building an extractor, all we need to do is capture a label.

Collecting labels for Row Match

Using Label Sets, you can assign a Header and/or Footer label for a Data Table. The Row Match method will utilize these labels in the place of its Header Extractor and Footer Extractor, respectively.

  1. To collect these labels, we will first go back up to our Content Model
    • The Labeling Behavior is already enabled.
  2. Next, we will navigate to the "Labels" tab.
  3. We will select the Data Table object whose labels we want to collect.
    • Ours is named "B. Services"
  4. First, we will collect the Data Table's Header label.
    • This is where the table "starts". For the Row Match method, no row instances will be collected before this label is encountered on the document.
    • In our case we collected the text B. Services Borrower Did Not Shop For as our Header label.

More importantly, in our case, we need a Footer label. The Footer label will determine where the table "stops" on the document. For the Row Match method, no row instances will be collected after this label is encountered on the document. Any matches from our Row Extractor will be discarded after the collected label.

  • FYI: Truthfully, all we really need in this situation is a Footer label. All our false positive row results come after our true positive results. You don't necessarily have to collect a Header and a Footer label. You can choose to collect one or the other or both to best suit your document's needs.
  1. To collect a Footer label, click the "Add Label" icon at the top of the tab.
  2. Click "Add Footer" when the option pops up.

  1. Collect the Data Table's Footer label.
  2. In our case we collected the text C. Services Borrower Did Shop For as our Footer label.

That's it! That's all you need to do to establish the table's header and footer. There is no need to collect labels for the Data Columns. Collecting labels for Data Columns is only necessary for the Tabular Layout method.

I will repeat. The Row Match method will only utilize the Data Table's labels. If you collect labels for the Data Columns and you're using the Row Match method, they will do nothing as far as table extraction goes.

Enabling label sets for Row Match

The only thing left to do is "tell" the Row Match method you want to use the Header and Footer labels. This is done by enabling the Use Labelset property.

  1. Navigate to the Data Table object in the Node Tree Viewer.
  2. Expand the Row Match sub-properties.
  3. Expand the Options properties.
  4. Change the Use Labelset property from False to True.

Test Results

With the labels collected and the Use Labelset property enabled, our Data Table will properly collect the rows we want from this table.

  1. Upon testing extraction of the selected document folder...
  2. Our Row Extractor collects the desired rows.
  3. Only rows coming after the Header label and before the Footer label are returned.

The Fluid Layout Method

The Fluid Layout table extraction method is designed to switch between the Tabular Layout method and the Row Match method, depending on how a Data Table's labels are configured. So, if you have a varied set of documents where Tabular Layout works well for some Document Types and Row Match works well for other Document Types, you may be able to use Fluid Layout for all of them, avoiding the need for Data Element Overrides.

Label Sets must be collected to use the Fluid Layout method. Each Document Type will use either Tabular Layout or a Row Extractor to collect table data depending on how the labels for a Data Table are collected. Therefore, you cannot utilize the Fluid Layout method without a Labeling Behavior enabled.

The Fluid Layout table extraction method is not only "Label Set aware", it is Label Set dependent.

For example, take these two versions of code descriptions from an EOB form.

Version 1 is clearly a table. It uses the labels "CODE" and "DESCRIPTION" to delineate between each column. The Tabular Layout table extraction method would handily extract this information, returning everything in the "CODE" column to one Data Column and everything in the "DESCRIPTION" column to another.

Version 2 is not exactly a table, but a Data Table could still use the Row Match method to form a table from this list of information. Each bulleted item in the list could be returned as a table row. The code could be filtered into one Data Column and the description could be filtered into another.

You could not use the Tabular Layout method for this "table". There are no column labels present.

  • There is, however, a header label for the whole table "Code", which will be important for the Fluid Layout method.

So, we have a situation where the Tabular Layout or the Row Match method is preferable, depending on the document's layout. Next, we will review how to configure the Fluid Layout table extraction method to target both table structures.

Collect Labels

The first thing you will want to do is collect labels for your Data Table for each document type. How the labels are collected will determine which table extraction method the Fluid Layout method executes.

  • To execute the Tabular Layout method, the Data Table's Data Column Header labels must be collected.
    • Optionally, you may choose to collect a Header and/or Footer label for the Data Table.
  • To execute the Row Match method (also referred to as the Flow Layout), you must collect the Data Table's Header label. You must NOT collect labels for the Data Table's Data Columns.
    • This is how Grooper checks which extraction method is used for each Document Type. If Data Column labels are present, the Tabular Layout configuration is used. If no Data Column labels are present, but the Data Table's Header label is present, the Flow Layout (i.e. Row Match) configuration is used (see the sketch after this list).
    • Optionally, you may choose to collect a Footer label for the Data Table.
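
Put as pseudologic, the decision described above looks roughly like this. The function name and dictionary keys are hypothetical; Grooper makes this determination internally based on the Document Type's Label Set.

```python
def choose_layout(doc_type_labels: dict) -> str:
    """Sketch of the Fluid Layout decision described above (not Grooper source code)."""
    if doc_type_labels.get("data_column_labels"):
        return "Tabular Layout"
    if doc_type_labels.get("data_table_header_label"):
        return "Flow Layout (Row Match)"
    return "no table layout applies"   # assumed fallback for illustration

print(choose_layout({"data_column_labels": ["CODE", "DESCRIPTION"]}))   # Tabular Layout
print(choose_layout({"data_table_header_label": "Code"}))               # Flow Layout (Row Match)
```
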
  1. We will use this Content Model named "The Fluid Layout Method - Model" for this exercise.
    • Its Labeling Behavior has already been enabled.
  2. We have navigated to the "Labels" tab to start collecting labels.
  3. We have selected this Batch named "The Fluid Layout - Test Batch".
  4. Notice we have two Document Types
    • "V1 - Tabular Layout" will correspond to the document whose code description is in a proper table with column headers for the "CODE" and "DESCRIPTION" columns.
    • "V2 - Row Match" will correspond to the document whose code description is in a bulleted list.
  5. The two document folders in the Batch have already been assigned the appropriate Document Type.

For Tabular Layout Document Types...

The "V1" Document Type will utilize the Fluid Layout method's Tabular Layout configuration. To execute the Tabular Layout configuration, much like executing the Tabular Layout table extraction method in general, Data Column labels must be collected.

  1. We have selected the "V1 - Tabular Layout" document folder in the Batch.
  2. The Header labels for the "Code" and "Description" Data Columns have been collected.
    • CODE for the "Code" Data Column
    • DESCRIPTION for the "Description" Data Column

FYI

Just as with the Tabular Layout table extraction method as a "stand alone" extraction method, collecting a Data Table's Header label is optional when Tabular Layout is used within Fluid Layout.

That said, it is still generally considered best practice to collect a row of header labels using the Data Table's Header label, if possible.

  1. Here, we've collected a Header Label for the "Code Remarks" Data Table.

For Flow Layout Document Types...

The "V2" Document Type will utilize the Fluid Layout method's Flow Layout configuration. This will utilize the Row Match method to return table data. To execute the Flow Layout configuration ONLY the Data Table's label must be collected.

  1. We have selected the "V2 - Row Match" document folder in the Batch.
  2. The Header label for the "Code Remarks" Data Table has been collected.
  3. DO NOT collect Data Column labels for Document Types you wish to utilize the Row Match method to collect data.

Configure Fluid Layout

Now that the labels are collected for our Document Types we can configure the Fluid Layout extraction method for our Data Table.

  1. Select the Data Table in the Node Tree Viewer.
    • We've selected our "Code Remarks" Data Table.
  2. Select the Extract Method property.
  3. Using the dropdown menu, select Fluid Layout.

Expanding the Fluid Layout sub-properties, you can see there are two Layout configurations.

  1. The Tabular Layout configuration will be applied to Document Types whose Data Column labels have been collected.
  2. The Flow Layout configuration will be applied to Document Types whose Data Column labels have NOT been collected, as long as the Data Table's Header label is present.

By expanding the Tabular Layout and Flow Layout properties, you can see their property panels are identical to the Tabular Layout and Row Match table extraction methods respectively.

  1. The properties you see here are the same set of properties you configure for the Tabular Layout method.
  2. The properties you see here are the same set of properties you configure for the Row Match method.

All that's left is to configure extraction logic for each of the Layouts.

Configure Flow Layout

The Flow Layout configuration extracts table data using the Row Match method. What do you need in order for Row Match to collect table data? A Row Extractor.

  1. In our Local Resources folder, we already have a Data Type that will return rows properly for our "V2" Document Type.
  2. Using the Row Extractor property, we've referenced the aforementioned Data Type.

For our purposes, that's all we need to do. For the "V2 - Row Match" Document Type, this extractor will properly return each row and collect each column's data. We have no need to configure any of the other Row Match properties.

Configure Tabular Layout

The Tabular Layout configuration extracts table data using the Tabular Layout method. What do you need in order for Tabular Layout to collect table data? At least one Data Column's Value Extractor must be configured in order to detect each row in the table.

  1. We've selected the "Code" Data Column to configure.
  2. For its Value Extractor we've used a Pattern Match extractor.

  1. The Pattern Match extractor's Value Pattern is set to the regex \w+ and its Prefix Pattern is set to the regex \n.
  2. This will return one result for each row of the "CODE" column, effectively detecting all four rows present (see the sketch below).
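
To see why that prefix/value combination yields one result per row, here is a rough stand-in for the behavior using Python's re module. The sample codes and descriptions are invented, and treating the Prefix Pattern as a lookbehind is an illustrative simplification of how Pattern Match applies it.

```python
import re

# Prefix Pattern \n followed by Value Pattern \w+ :
# each match is anchored to the start of a line, so each row's code is hit exactly once.
pattern = re.compile(r"(?<=\n)\w+")

text = ("CODE DESCRIPTION\n"
        "A1 Deductible applied\n"
        "B2 Non-covered service\n"
        "C3 Duplicate claim\n"
        "D4 Adjusted amount")
print(pattern.findall(text))   # ['A1', 'B2', 'C3', 'D4'] -- four results, four rows
```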

This is a fairly simple table with only two columns. Just configuring one Data Column's Value Extractor will be sufficient for our needs.

  1. If you need to configure any additional Tabular Layout settings, you can do so by selecting the Data Table in the Node Tree Viewer.
  2. Expand out the Tabular Layout properties and configure them as needed.
    • Again, this is a simple table with simple configuration needs. These default property configurations should be adequate to collect table data for the "V1 - Tabular Layout" Document Types.

Test Extraction

Now that extraction is configured for both the Tabular Layout and Flow Layout configurations for our documents, Grooper will switch between the Tabular Layout and Row Match table extraction methods, depending on the Document Type.

For the "V1 - Tabular Layout" Document Type, Data Column labels were collected.

Therefore Grooper extracts the table using the Tabular Layout configuration.

For the "V2 - Row Match" Document Type, only the Data Table's Header label was collected, and no Data Column labels were collected.

Therefore, Grooper extracts the table using the Flow Layout configuration (using the Row Match method).


How to use label sets for section extraction

There are two Label Set aware Extract Methods for Data Sections.

  1. Transaction Detection
  2. Nested Table

The Transaction Detection method will be most applicable to the majority of use cases wanting to use labels to produce section instances. If you simply want to produce a section starting at a header label and ending at a footer label, the Transaction Detection method is what you want. However, this configuration of Transaction Detection is quite different from how it normally produces sections. We will go over how Transaction Detection establishes section instances both with and without Label Sets.

The Nested Table method is a much more niche section extraction method. It produces section instances using repeating tables, nested within each section. This can be a highly effective way to target sections for certain use cases, such as medical EOB (explanation of benefits) forms.

Label sets and the Transaction Detection method

About Transaction Detection

The Transaction Detection section extraction method is useful for semi-structured documents with multiple sections that are themselves very structured, repeating the same (or at least very similar) field or table data.

For example, take this monthly tax reporting form from the Oklahoma Tax Commission.

There are five sections of information on this document listed as "A" "B" "C" "D" and "E". Each of these sections collects the exact same set of data:

  1. A "Production Unit Number" assigned to an oil or natural gas well.
  2. A "Purchaser/Producers Report Number"
  3. The "Gross Volume" of oil or natural gas produced
  4. The "Gross Value" dollar amount of oil or natural gas produced
  5. The "Qualifying Tax Rate" ultimately used to calculate the tax due for the well's production.
  6. And so on.

The Transaction Detection method looks for periodic similarity (also referred to as "periodicity") to sub-divide a document into multiple section instances.

For structured information like this, one way you can define where each section starts and stops is just by the patterns in the fields themselves. These values are periodic. They appear at set intervals, marking the boundaries of each section.

For example,

  1. The "Production Unit Number" is always found at the start of the section.
  2. The "Exempt Volume" is always found somewhere in the middle of the section.
  3. The "Petroleum Excise Tax Due" is always found at the end.

The Transaction Detection method detects the periodic patterns in these values to divide up the document into sections, forming one section instance from each periodic pattern of data detected. Part of how the Transaction Detection detects these patterns is by using extractors configured in the Data Section's child Data Field objects. These are called Binding Fields.

Grooper uses the results matched by these Data Fields to detect each periodic section. For example, you might have a "Production Unit Number" Data Field for these sections that returns five results, one for each section. Once these five results are established, Grooper will look for other patterns around these results to establish the boundaries of each of the five sections.

The Transaction Detection method also analyzes a document's text line-by-line looking for repeated lines that are highly similar to each other.

For example, each of the yellow highlighted lines is extremely similar. They are essentially identical except for the starting character on each line (either "A" "B" "C" "D" or "E"). This repeated pattern is a good indication that we have a set of repeated (or "periodic") sections of information.

Furthermore, the next lines, highlighted in blue, are also similar as long as you normalize the data a bit. If you replace the specific number with just a "#" symbol, they too are nearly identical.

The Transaction Detection method will further go line-by-line, comparing the text on each one to subsequent lines, looking for repeating patterns. Such is the case for the rest of the green highlighted lines. Even accounting for OCR errors, each line is similar enough to detect a pattern. We have 5 sets of very similar lines of text, and ultimately 5 section instances returned for the Data Section.

Eventually, Grooper will detect a line that does not fit the pattern. The red highlighted line is totally dissimilar from the set of similar lines detected previously. This is where Grooper "knows" to stop. Not fitting the periodic pattern, this marks a stopping point. This text is left out of the last section instance, and with no further lines matching the detected periodic pattern, no further section instances are returned.

The Transaction Detection method is not going to work well for every use case. It succeeds best where most of the data in the section is numerical in nature.

It's easy to normalize numeric data. Any individual number can be swapped for a "#" symbol. A currency value on a line of text in one section could be $988,000.00 and $112,433.00 in another, but as far as comparing the lines for periodic similarity (also referred to as "periodicity") goes, they can both be normalized as "$###,###.##". Lexical data tends to be trickier. How do you normalize a name, for example? How do you differentiate a name from a field label? You can do it with a variety of extraction techniques, but not using this line-by-line approach to determining how similar one line is to another.
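
A toy version of that normalization step is shown below. The production unit numbers, quantities, and rates in the sample lines are invented (only the two currency amounts come from the paragraph above), and this is a simplification of the line comparison Transaction Detection actually performs.

```python
import re

def normalize(line: str) -> str:
    """Replace every digit with '#' so numerically different lines compare as equal."""
    return re.sub(r"\d", "#", line)

a = "A 123-456789-0-001 42 $988,000.00 0.0715"
b = "B 123-987654-0-002 17 $112,433.00 0.0715"
print(normalize(a))                            # "A ###-######-#-### ## $###,###.## #.####"
print(normalize(b))                            # "B ###-######-#-### ## $###,###.## #.####"
print(normalize(a)[1:] == normalize(b)[1:])    # True -- identical except the section letter
```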

This is precisely why it's called "Transaction" Detection. It works best with transactional data, which tends to be currency, quantity or otherwise numerical values. Indeed, this method was specifically designed for EOB (Explanation of Benefits) form processing and medical provider payment automation in general.

FYI

What does this have to do with Labeling Behavior and Label Sets?

We're getting there. Ultimately, Transaction Detection is "Label Set aware" and can take advantage of collected Header and Footer labels for a Data Section object. However, collecting labels for the Data Section will quite dramatically change how Transaction Detection works.

It is best to understand how this sectioning method works without Label Sets before we delve into how it works with them.

Configuring Transaction Detection with Binding Fields

Without utilizing Label Sets, the Transaction Detection sectioning method requires at least one Binding Field to be assigned in order to detect the periodic similarity among lines of text in a document, ultimately forming the Data Section's section instances.

  1. For this example, we will end up configuring the "Production Info" Data Section of this Data Model.
  2. We will utilize the "Production Unit Number" as the Binding Field.
  3. This Data Field utilizes a simple Pattern Match extractor for its Value Extractor assignment.

  1. The Pattern Match returns the production unit numbers on the document using a simple pattern \d{3}-\d{6}-\d-\d{3}.
  2. Importantly, notice this returns five result candidates (when testing extraction at the Data Field level in the Node Tree).
    • Due to the limited space on the screen, only four instances are visible here, but should you scroll down further in the Batch Viewer, you would see the fifth result being returned.
    • This will be important because we want to end up creating five section instances. If you expect to return five section instances, your Binding Field's extractor (or Binding Fields extractors if using more than one) will need to return five results.

Next, we will configure the "Production Info" Data Section to create section instances using the Transaction Detection method.

  1. Select the Data Section in the Node Tree.
  2. Using the Extract Method property, select Transaction Detection.
  3. Click the ellipsis at the end of the Binding Fields property.

  1. Choose which Data Fields in the Data Section should be used as Binding Fields by checking the box next to the Data Field.
    • Here, we have selected the "Production Unit Number" Data Field.
  2. Click "OK".

For this example, all we need to do is assign this single Data Field as a Binding Field. There is enough similarity between the repeating sections that this is all we need. (For more complicated situations you may need multiple Binding Fields. Just be sure all Binding Fields are present in each section; no "optional" Data Fields should be used as Binding Fields.)

The Transaction Detection method will then go through the line-by-line comparison process around the Binding Fields to detect periodic similarities to produce section instances.

  1. How Grooper goes about detecting these periodic patterns is controlled by the Periodicity Detection set of properties.

  1. In our case, five section instances were established, one for each result from the "Production Unit Number" Data Field's Value Extractor.

  1. If you need to troubleshoot the Transaction Detection method's results, the "Diagnostics" can give you additional information as to how Grooper detected these repeating patterns in the document's text data. Click the "Diagnostics" button at the top of the tab.

  1. You will find several reports for the Data Section such as the "Execution Log".

Configuring Transaction Detection with label sets

Now that we understand the basics of the Transaction Detection method, we can look at how this sectioning method interacts with the Labeling Behavior. Its behavior is wildly different if a Header label is collected for the Data Section. Assuming you can collect a Header label for the Data Section, it is so different that a Binding Field is not even necessary to produce the section instances.

Establishing the section instances is almost as simple as...

  1. Start the section instance at the Header label.
  2. Stop the section instance at the next Header label (or Footer label).
  3. Repeat for every Header label found on the document (see the sketch below).
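
That loop can be sketched in a few lines. The section_spans function, its character-offset spans, and the one-line sample document are illustrative only; Grooper works against the document's recognized text and label positions.

```python
def section_spans(text, header, footer=None):
    """Each section starts at a header hit and ends at the next header (or the footer)."""
    starts, i = [], text.find(header)
    while i != -1:
        starts.append(i)
        i = text.find(header, i + 1)
    spans = []
    for n, start in enumerate(starts):
        if n + 1 < len(starts):
            end = starts[n + 1]                 # stop at the next header
        elif footer and footer in text[start:]:
            end = text.index(footer, start)     # or at the footer, for the last section
        else:
            end = len(text)
        spans.append((start, end))
    return spans

doc = "HEADER a1\nHEADER b2\nHEADER c3\nTOTALS"
print(section_spans(doc, "HEADER", "TOTALS"))   # three spans, one per section
```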

For example, we have collected a Header label for the "Production Info" Data Section here.

  1. To add the label, we've selected the Content Model in the Node Tree.
  2. We've navigated to the "Labels" tab.
  3. We've selected the document in the Batch classified as the "OTC Form 300" Document Type.
    • In other words, this is the Label Set for the "OTC Form 300" Document Type.
  4. We've selected the Data Section in the Data Model. For the Header label, we've captured the first line of field labels.
    • 8. Production Unit Number 9. Purchasers/Producers Report Number 10. Gross Volume 11. Gross Value
  5. Notice we have five hits for this label, one at the start of each section.

Next, we will configure the "Production Info" Data Section to create section instances using the Transaction Detection method.

  1. Select the Data Section in the Node Tree.
  2. Using the Extract Method property, select Transaction Detection.
  3. Notice no Binding Fields are selected.

  1. Let's go back to the Data Section.
  2. We have set the Extract Method to Transaction Detection.
  3. Note that we have no Binding Fields set on this Data Section.

  1. Here we are getting all five sections being returned!
  2. The section instance starts on the line containing the Header label.
  3. And it ends the line before the next Header label.
    • Then the second section instance starts at the second header and so on.


Label Sets and the Nested Table method


The Nested Table section extraction method was specifically designed for a particular combination of sectional and tabular data found on medical EOB (Explanation of Benefits) forms (however, it may have application in other areas). These documents are often broken up into sections of claims, with information like a claim number and a patient's name, followed by a table of information pertaining to services rendered, and ending with some kind of footer, usually a total line adding up amounts in the table.

One way you can often identify where these claim sections start and stop is by the tables themselves. Essentially, you'll have one table per claim. Or, in Grooper's case, one Data Table instance per Data Section instance.

The Nested Table sectioning method takes advantage of these kinds of structures, utilizing a Document Type's Label Set to do so.

The Nested Table method has two hard requirements:

  1. The Data Section...
  2. must have a child Data Table object.
    • Furthermore, the Nested Table method was designed to work best when its child Data Table uses the Tabular Layout method.
    • While it is possible for this Data Table to use any tabular extraction method, the Tabular Layout method is preferred.
  3. The section must have a "footer" able to be captured as the Data Section's Footer label.

Set Data Section's Extract Method to Nested Table

The Nested Table method is a little different in that it is a sectional extraction method but also involves tabular data. Ultimately, both a Data Section object and a Data Table object are required for it to work. However, it is primarily a method of breaking up a document into multiple sections for data extraction purposes. As such, it is a Data Section extraction method.

  1. Select a Data Section in the Node Tree.
  2. Set the Extract Method property to Nested Table.
  3. Using the Table property select the child Data Table to be used to establish the repeated sections in the document.
  4. In our case, we've selected the highlighted child Data Table named "Service Line Info".
    • This Data Table's extraction results, combined with the Data Section's Footer label we will collect, will form the multiple section instances for the Data Section.
    • The Data Table must be a direct child of the Data Section.

Configure the Data Table

The Data Table should be configured to collect all table rows for the full document. When configuring the Data Table and testing its results, just ensure the table accurately extracts the full document as a single table. The Data Section (using the Nested Table method) will take care of breaking out the rows into the appropriate sections.

It is considered best practice for the child Data Table to use the Tabular Layout method.

The Nested Table method was designed specifically with a Data Table using the Tabular Layout table extraction method in mind. While it is technically possible to use other table extraction methods, you will achieve the best results when the Data Table uses the Tabular Layout method.

  1. Select the Data Section's child Data Table.
    • If you have more than one child Data Table, make sure you select the Data Table referenced by the Nested Table method's Table property.
  2. Assign the Data Table's Extract Method and configure its extraction.
    • In our case we have enabled the Tabular Layout method, having already collected labels for the Data Table and its child Data Columns for the "Astro Insurance" Document Type.

  1. Go to the "Tester" tab to test the extraction.
  2. Click the play button to ensure the Data Table collects all rows on the document.
  3. Note, we get one big table since we're testing on the Data Table object of our Data Model's hierarchy.
  4. Even though we technically have three distinct tables on the document in three sections, one for each patient.
    • This is good for now. This is what we want to verify, that we get the correct data populated for every table row. Sectioning out the document and placing each table in its own section will be performed by the parent Data Section using the Nested Table method.

FYI

The general guidance for testing the child Data Table's extraction results is to verify every row on the document is returned as a single table.

However, there are some situations where you may not return every row but still get Nested Table to section the document appropriately and ultimately return each table to each established section.

You may run into this if your table uses a Footer label or extractor. To illustrate this, we added a Footer label for this Data Table, using the text label "TOTALS" at the bottom of each table.

  1. Notice we only return the first three rows for the first table on this document where before we were returning every table row for every table.
  2. That is because the Data Table stopped looking for rows once it encountered the Footer label TOTALS
  3. However, because that footer is present at the end of each table (which will ultimately inform the Nested Table method how to establish sections), each subsequent table will still extract successfully and populate its section when the Data Section executes and creates the sections for this document, as seen in the subsequent tabs.

To sum up: in general, make sure your Data Table extracts every row for the whole document when testing your Data Table's configuration. However, if you have repeating footers, you may only be able to verify that the first section's table rows populated correctly; you will be able to verify the rest of the document's tables extract correctly whenever you execute the Data Section's extraction.

Add the Data Section's Footer Label

In order for the Nested Table method to properly section out the document, you must assign a Footer label in the Document Type's Label Set for the Data Section. This will give Grooper an awareness of where the section should stop (ultimately allowing another section to start). In our case, we can use the text label "Totals". At the end of every table/section there is this "Totals" line totaling up various columns in the table. Since this label is present at the end of every section, we can collect it as the Data Section's Footer label, which the Nested Table method will then use to establish where each section instance ends.

  1. To collect the Footer label, first navigate to the Content Model in the Node Tree.
  2. Select the "Labels" tab.
  3. Under the Data Section object...
  4. Select the Footer tab.
  5. Collect the label.
    • See the Collect Label Sets section of this article if you need more information on how to collect labels.

The Footer label must be collected for the Data Section for the Nested Table method.

Even if the Data Table object collects and uses a Footer label for its own needs, the Data Section must have a Footer label defined as well (even if it's the exact same label).

Run Global header detection

If we test extraction at this point, we will see mixed results. The Data Section will correctly produce the three section instances for this document. However, the tabular data will not be collected.

  1. Select the Data Section in the Node Tree.
  2. Click on the "Tester" tab.
  3. Click the play button to test the extraction. Here, we can verify these results.
  4. The Nested Table method populates three section instances, as desired (you may need to scroll to the end to see the section instance selection).

  1. The Data Field extraction executes flawlessly.
    • Not only do we have the right number of section instances, but their dimensions are correct as well. They fully encapsulate the appropriate text data.
  2. However, the Data Table does not extract anything at all.
  3. We're going to click on the "Inspector" icon at the top.

The "Inspector" tab can be very helpful when troubleshooting extraction in general, but particularly Data Section and Data Table extraction.

  1. Expand the "Claim Section" Data Section result to inspect the individual section instance results.
  2. Select one of the section instances.
  3. You can visually see the dimensions to the right.
    • It is likely hard to see in this image. However, the section will be outlined in a red line.
  4. Below the image you can see all the text data for the selected section instance.

If you were to select each section instance, you could verify at this point that all three sections were established successfully and the text data for each table is present. It just wasn't extracted. Why not?

This has to do with where the section instances are and where the Data Table and Data Column labels are.

  1. This is where the section instances were determined to be on the document.
    • And this is absolutely correct. This is what we wanted to happen.
  2. However, the Data Table and Data Column labels for this Document Type's Label Set fall outside of each of these sections.

This presents a challenge. The Tabular Layout table extraction method relies on these labels in order to extract the tabular data. As is, Grooper can't "see" outside of the section instances. If only Grooper could look up to the table and column labels, the table would extract with ease.

Luckily, there is a way for the Tabular Layout method to do just that, using the Run Global property.

  1. Navigate to the child Data Table object in the Node Tree.
  2. Expand the Tabular Layout properties.
  3. Expand the Header Detection properties.
  4. Change the Run Global property from False to True.

This will allow Grooper to detect Data Table and Data Column labels outside of the section instances. Perfect for what we're trying to do here.
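
Conceptually, all this setting does is widen the text searched for the header labels from the section instance to the full document. A rough, hypothetical illustration of that idea (simple Python, not Grooper's API):

    # Hypothetical illustration of the idea behind Run Global: header label
    # detection searches the full document's text instead of only the text
    # inside the section instance.

    def find_header(label, section_text, document_text, run_global=False):
        """Return True if the header label can be found in the search scope."""
        search_scope = document_text if run_global else section_text
        return label in search_scope

    document_text = "Date Of Service   Procedure   Amount\n01/02/2023 Office Visit 125.00"
    section_text = "01/02/2023 Office Visit 125.00"   # the header row falls outside the section

    print(find_header("Procedure", section_text, document_text, run_global=False))  # False
    print(find_header("Procedure", section_text, document_text, run_global=True))   # True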

What's going to happen when we test extraction now? Find out in the next tab!

Test for success

With the child Data Table now using global header detection (by setting the Run Global property to True), it can look outside the section instances for the column header labels on the full document. Let's see how our sections extract now and if we get any table data populated.

  1. We're navigating back to the parent Data Section to test extraction.
  2. Upon testing extraction, the Nested Table method establishes the three section instances, as we've seen before. Here we are looking at the third section.
  3. And now, the table data is extracted.

Success! The Run Global property is extremely beneficial when trying to extract table data from multiple sections. Without it, Tabular Layout would not have any way of referring to the column header labels collected in the Label Set. With this property enabled, Tabular Layout can do something very atypical for sectional data extraction: it can look beyond a section instance's text data and refer to the full document (in this case, to locate the Data Table and Data Column labels in the Label Sets).

FYI

You can utilize the Run Global property to this effect when using other Data Section extraction methods as well, not just Nested Table.

However, please note the Run Global property is only available/applicable to Data Tables using the Tabular Layout table extraction method.


Additional information

Label "Layout" options

As we've been collecting labels, you may have noticed the Layout property change from Simple to Tabbed or Wrapped. The Layout property determines how the label's text is presented on the document. The Layout can be one of the six following options:

  • Simple
  • Tabbed
  • Substring
  • Boxed
  • Wrapped
  • Ruled

When collecting labels in the Labels tab, Grooper will automatically detect the appropriate label layout. However, there may be some circumstances where you need to manually select the label's layout. Next, we will describe each of these Layout options.

Simple

The Simple Layout is by far the most common. Most fields on a document will utilize this layout. These labels consist of words that do not cross "segment boundaries", meaning the words themselves are not separated by large amounts of whitespace like tabs or by terminal characters like a colon (as a colon character often marks the end of a label).

  1. We will use this Content Model named "Label Layouts - Model" which already has a Labeling Behavior enabled.
  2. We will use these Data Fields to capture various labels and describe their corresponding Layout specifications.
  3. We will use this Batch named "Label Layouts - Test Batch".
  4. For the "Simple" Data Field we captured this portion of text.
  5. Notice the text here was not returned as a label.

  1. Click the check mark next to the label to open up the "Label Properties" window.
  2. Grooper automatically detected the Layout as Simple.
    • Even though the words are the same Im a Simple Label, its layout is different. There is a large gap between Simple and Label. The Simple Layout does not permit this. Hence, only the label to the left is returned. Only a label whose words do not cross a segment boundary, like a tab, is returned.
    • For simple fields, this makes sense. You don't expect a single label to cross these segment boundaries. Large gaps in whitespace or terminal characters, like colons, are used to distinguish one label from the next. The Simple Layout utilizes this aspect of document structure to toss out labels whose words do cross these boundaries, preventing false positive label matches.

Tabbed

The Tabbed Layout is used for situations where you do want to cross segment boundaries. Think about capturing a table's row of header labels. Often each column's label will be separated by large amounts of whitespace. The Simple Layout would not permit you to capture the table's header but Tabbed will.

  1. For the "Tabbed" Data Field, we captured this portion of text.
    • Notice the large gap between Tabbed and Label. However, the label still matches.
  2. This text was also returned, even though it does not have large whitespace gaps between any of its words.
  3. Click the check mark next to the label to open up the "Label Properties" window to check what Label Layout Grooper automatically selected.

  1. Grooper automatically chose the Tabbed Layout option.
    • The Tabbed Layout does not require words to cross segment boundaries; it merely permits them. That is why Grooper collected both labels. You can think of this Layout as a more permissive version of Simple. Tabbed doesn't care whether there are segment boundaries between a label's words, whereas Simple does.

Substring

The Substring Layout is intended for circumstances where a label is bookended between other portions of text. In other words, it is a "substring" of a larger string of text.

  1. For the "Substring" Data Field, we captured this portion of text.
  2. This label does not match, even though its text Substring Label: is identical.
    • This is a good example of a substring. There are portions of text before and after it (in both cases the word Value).
  3. Click the check mark next to the label to open up the "Label Properties" window to check what Label Layout Grooper automatically selected.

  1. Grooper automatically detected the layout as Simple.
    • Since the text we captured is sandwiched between other text in the second label, it cannot be returned using the Simple Layout.

This is a situation where we would want to manually assign the label's Layout, if we want to collect substrings as labels.

  1. To do this, we just need to select Substring using the dropdown menu for the Layout property.

  1. We can see that the first simple label we selected is still being returned.
  2. Furthermore, notice the substring is now also returned, using the Substring Layout.
    • Again, you may think of this as a more permissive version of the Simple Layout.
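
To contrast the three layouts covered so far, here is a hedged, purely illustrative sketch (plain Python with a made-up segment rule, not Grooper's matching logic) of how strictly each one treats a candidate line of text:

    import re

    # Illustrative comparison of the Simple, Tabbed, and Substring checks.
    # "Segments" here are runs of text separated by large gaps (two or more
    # spaces); that rule is an assumption made for this sketch.

    def split_segments(line):
        return [s for s in re.split(r"\s{2,}", line) if s]

    def norm(text):
        return re.sub(r"\s+", " ", text).strip().rstrip(":")

    def matches(label, line, layout="Simple"):
        segments = split_segments(line)
        if layout == "Simple":
            # the label must stand alone as a single segment
            return any(norm(seg) == label for seg in segments)
        if layout == "Tabbed":
            # the label may span several consecutive whole segments
            return any(norm(" ".join(segments[i:j + 1])) == label
                       for i in range(len(segments))
                       for j in range(i, len(segments)))
        if layout == "Substring":
            # the label may be embedded inside a longer run of text
            return label in norm(line)
        return False

    print(matches("Im a Simple Label", "Im a Simple Label", "Simple"))              # True
    print(matches("Im a Simple Label", "Im a Simple      Label", "Simple"))         # False: crosses a gap
    print(matches("Im a Simple Label", "Im a Simple      Label", "Tabbed"))         # True
    print(matches("Substring Label", "Value Substring Label: Value", "Simple"))     # False
    print(matches("Substring Label", "Value Substring Label: Value", "Substring"))  # True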

Wrapped

The Wrapped Layout will return labels that wrap full lines of text on a document. So, if a label starts on one line, then continues on one or more lines after, this layout will successfully return it. The Wrapped Layout was also useful when we were collecting table labels for the entire header row. For tables that had column headers on multiple lines, this layout was the most appropriate way to return the whole row of column headers.
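
As a rough illustration of the idea (hypothetical Python with made-up lines, not Grooper's implementation), a wrapped match can be thought of as allowing the label to be assembled from a line plus the line or lines immediately following it:

    import re

    # Hypothetical sketch of the Wrapped idea: a label may start on one line of
    # the document and continue on the line(s) directly after it, as long as it
    # occupies those lines fully.

    def normalize(text):
        return re.sub(r"\s+", " ", text).strip()

    def wrapped_match(label, lines, max_lines=2):
        """True if the label can be assembled from up to max_lines consecutive full lines."""
        label = normalize(label)
        for start in range(len(lines)):
            for end in range(start, min(start + max_lines, len(lines))):
                if normalize(" ".join(lines[start:end + 1])) == label:
                    return True
        return False

    document_lines = [
        "This line",
        "wraps",
        "Amount Due:   125.00",
    ]

    print(wrapped_match("This line wraps", document_lines))  # True: spans two full lines
    print(wrapped_match("Amount Due:", document_lines))      # False: does not occupy the full line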

  1. For the "Wrapped" Data Field, we captured this portion of text.
  2. Click the check mark next to the label to open up the "Label Properties" window to check what Label Layout Grooper automatically selected.

  1. Grooper automatically detected this as a Wrapped label.
    • This is because the label wraps full lines. The text This line occupies one full line on the document, and wraps occupies the next.

  1. As a side note, if we manually change this Layout to Simple, the label will still return.
  2. Click "SAVE" to see what is returned.

  1. You can see that here we are still getting a match with the label.

The text matches in this case due to the Vertical Wrap property enabled on the Labeling Behavior, which allows the Simple layout to capture stacked labels.

  1. To check this, click on the "Content Model" Tab.
  2. Click on the ellipsis next to the Behaviors property.

  1. Here we can see that the Vertical Wrap property is enabled by default.

  1. We're going to go ahead and disable this property to see how it affects our results.

  1. Now we are getting an error and nothing is being returned for the Wrapped label with the Layout set to Simple. We would have to change the Layout back to Wrapped for this label to be collected.

  1. Here we have collected the label "So does this one".
  2. The first line, So does, is not stacked directly on top of the second line, this one. However, the first line does wrap to the next. This works because Grooper has automatically chosen the Wrapped Layout.

  1. Here we have changed the Layout to Simple and have Vertical Wrap enabled. We get an error for this label.
  2. This is not returned because the Simple Layout cannot recognize a wrapped label that is not stacked on top of itself.

  1. Here we have collected the last of the wrapped labels and it is being returned. Grooper detected this as a Wrapped Layout.
  2. The Wrapped Layout permits segment boundaries between text segments, such as the tab space between about and this in this portion of text.

Ruled

Lines are used on documents to divide fields, sections, and table columns, or otherwise distinguish one piece of information from another. Because of this, it is atypical to find a stacked label with a line between its first and second lines. The Simple Layout respects this by preventing labels from returning if a horizontal line falls between any portion of the stacked label.

However, there may be rare circumstances where a horizontal line does fall between portions of the stacked label. In that case, you will want to use the Ruled Layout.

Line location information must be present in the Layout Data in order for Grooper to determine if a line is present. A Line Detection or Line Removal command must have been previously executed by an IP Profile during Image Processing or Recognize to obtain this information.
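
The difference between Simple and Ruled for stacked labels comes down to whether a detected horizontal line is allowed to fall between the label's parts. A hedged sketch of that check (illustrative Python with made-up coordinates, not Grooper's line detection):

    # Illustrative sketch: a stacked label has an upper and a lower part, each
    # with a vertical extent (top, bottom). Detected horizontal lines are given
    # by their y position. All coordinates are made up for the example.

    def line_between(parts, horizontal_lines):
        """True if any detected horizontal line falls between the stacked parts."""
        gap_top = parts[0]["bottom"]     # bottom edge of the upper part
        gap_bottom = parts[1]["top"]     # top edge of the lower part
        return any(gap_top < y < gap_bottom for y in horizontal_lines)

    def stacked_label_matches(parts, horizontal_lines, layout="Simple"):
        if layout == "Simple":
            return not line_between(parts, horizontal_lines)  # a rule breaks the label
        if layout == "Ruled":
            return True                                       # a rule is permitted
        return False

    label_parts = [{"top": 100, "bottom": 112}, {"top": 120, "bottom": 132}]
    detected_lines = [116]   # a horizontal rule sits between the two parts

    print(stacked_label_matches(label_parts, detected_lines, "Simple"))  # False
    print(stacked_label_matches(label_parts, detected_lines, "Ruled"))   # True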

  1. For the "Ruled" Data Field, we captured this portion of text.
  2. The text currently matches in both cases whether or not there is a horizontal line between any portion of the stacked label.
  3. Click the check mark next to the label to open up the "Label Properties" window to check what Label Layout Grooper automatically selected.

  1. Grooper automatically detected this as a Ruled label.
    • This is because there is a horizontal line between the first line of the stacked label This line and the second line rules. The Ruled Layout also returns the label even if there is not a horizontal line separating the text.

  1. Let's see what happens when we set the Layout to Simple.

  1. With the Layout changed to Simple, only the stacked text without a line between its lines will return. Since there is a line present between the first and second lines of this label, it no longer returns.

If you want to use the Ruled Layout option, you must enable the Vertical Wrap property of the Content Model's Labeling Behavior.

  1. The Vertical Wrap property is the last configurable property in the Labeling Behavior property panel.

  1. If you attempt to collect a Ruled label without Vertical Wrap enabled or attempt to change a Layout to Ruled without Vertical Wrap enabled, you will get an error.
    • Again, keeping Vertical Wrap enabled is highly preferable and will avoid errors like this. Only disable this property if absolutely necessary.

Boxed

The Boxed Layout is intended to capture labels that wrap inside a box, enclosed in lines on all four sides. You can use this Layout to distinguish between labels that fall inside a box and those that do not when the Vertical Wrap property is disabled.

Line location information must be present in the Layout Data in order for Grooper to determine if a line is present. A Line Detection or Line Removal command must have been previously executed by an IP Profile during Image Processing or Recognize to obtain this information.
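
Conceptually, the Boxed check asks whether the wrapped label's bounding box sits entirely inside a box found by line detection. A purely illustrative sketch of that idea (hypothetical Python with made-up rectangles, not Grooper's implementation):

    # Illustrative sketch of the Boxed idea: the wrapped label only matches when
    # its bounding box sits entirely inside a detected box (lines on all four
    # sides). Rectangles are (left, top, right, bottom); coordinates are made up.

    def inside(inner, outer):
        return (outer[0] <= inner[0] and outer[1] <= inner[1]
                and inner[2] <= outer[2] and inner[3] <= outer[3])

    def boxed_match(label_rect, detected_boxes):
        return any(inside(label_rect, box) for box in detected_boxes)

    detected_boxes = [(50, 90, 300, 160)]    # one box found by line detection

    print(boxed_match((60, 100, 200, 140), detected_boxes))   # True: label sits inside the box
    print(boxed_match((320, 100, 450, 140), detected_boxes))  # False: label is outside every box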

  1. For the "Boxed" Data Field, we captured this portion of text.
  2. Both the label inside a box and the label outside a box are matched.
  3. Click the check mark next to the label to open up the "Label Properties" window to check what Label Layout Grooper automatically selected.

  1. With the default Labeling Behavior properties, Grooper automatically detected this as a Simple label.

You can differentiate between a label in a box and one outside a box by disabling the Vertical Wrap property.

  1. Here, we've gone into the Labeling Behavior property grid and set the Vertical Wrap property to Disabled.
    • CAUTION! If you do this, vertical wrapping will be disabled for all labels and all label Layouts, not just for the Boxed Layout.

  1. Switching back to the Labels tab, our label now does not match at all. With Vertical Wrap disabled, the Simple Layout will not be able to match the stacked labels.

  1. Let's change the Layout to Boxed.

  1. The label wrapped inside a box will match.
  2. The label that is not wrapped inside a box will not match.

If you want to use the Boxed Layout option in the manner described above, you must enable the Constrained Wrap property of the Content Model's Labeling Behavior.

  1. The Constrained Wrap property is the next-to-last configurable property in the Labeling Behavior property panel.

If the Constrained Wrap property is disabled and Vertical Wrap is also disabled, you will not be able to return labels inside a box with the Boxed Layout.

Data Element Override Utility

Earlier in this article, we talked about using the Labeled Value Extractor Type without configuring its Value Extractor. Again, it is considered best practice to configure its Value Extractor. However, sometimes data is difficult to pattern match. For example, an extractor that returns people or company names can be difficult to craft. These are exactly the cases for which Label Sets give you the option to leave a Labeled Value extractor's Value Extractor unconfigured.

To make the best use of this functionality, Data Element Overrides are typically necessary. Indeed, because the Label Set approach is more templated in nature, Data Element Overrides can be a useful tool to fine tune extraction for one specific Document Type. In this section, we will use the "Purchase Order Number" Data Field of our "Labeling Behavior - Invoices - Model" Content Model to demonstrate this.

Revisiting the Problem

The problem arose due to how the Labeled Value extractor behaves when its Value Extractor is left unconfigured. For some of our invoices, this didn't really present a problem at all.

  1. Here, we have the "Purchase Order Number" Data Field selected in our "Labeling Behavior - Invoices - Model" Content Model.
  2. The Data Field's Value Extractor is set to Labeled Value, as is appropriate to utilize the label collected for each Document Type in their Label Sets.
  3. We have re-set the Labeled Value extractor's Value Extractor. It is unconfigured.

For certain document layouts, this approach works just fine.

  1. Here, we have selected a "Rechnung" Document Type folder in the "Labeling Behavior - Invoices - Test Batch" Batch.
  2. Upon testing extraction...
  3. The correct value appropriately extracts.
    • This is due to the special functionality of the Labeled Value extractor when using Label Sets and leaving its Value Extractor unconfigured. The extractor will return text segments to the right of the Data Field's collected label.

However, this will not be the case for all document layouts, notably those whose labels are stacked vertically on top of their corresponding value.

  1. Here, we have selected a "Factura" Document Type folder.
  2. Upon testing extraction...
  3. We do not get the right value.
    • Without a Value Extractor configured, the Labeled Value extractor will return text segments to the right of the label, which, in this case, is not the right text data.

The Maximum Distance settings for this extractor are set to default. The extractor will return text results a maximum of 2 inches to the right and 2 inches below. So, why isn't it returning the actual purchase order number? It certainly falls within 2 inches below the label.
This is due to the specialized way Labeled Value works without a Value Extractor configured. It will always return text data to the right of the label if any is present within the Right Maximum Distance setting.
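
To illustrate the behavior described above, here is a hedged sketch (plain Python with made-up candidate positions and values, not Grooper's extraction engine) of how text to the right of the label wins whenever the Right Maximum Distance allows it, and how clearing that distance forces the value below the label:

    # Illustrative sketch (not Grooper's extraction engine) of the Maximum
    # Distance idea. Positions are in inches relative to the collected label,
    # and the candidate values are made up. Text beside the label wins whenever
    # the Right distance allows any candidate there.

    def pick_value(candidates, max_right=2.0, max_below=2.0):
        """candidates: list of (text, inches_right_of_label, inches_below_label)."""
        beside = [c for c in candidates
                  if max_right and 0 < c[1] <= max_right and c[2] == 0]
        if beside:
            return beside[0][0]
        below = [c for c in candidates if max_below and 0 < c[2] <= max_below]
        return below[0][0] if below else None

    candidates = [
        ("Terms: Net 30", 1.5, 0),   # unrelated text sitting to the right of the label
        ("PO-44871", 0, 0.3),        # the actual purchase order number, below the label
    ]

    print(pick_value(candidates))                  # 'Terms: Net 30' (not what we want)
    print(pick_value(candidates, max_right=None))  # 'PO-44871' (Right distance cleared)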

However, we can easily get this extractor to return the actual purchase order number. All we have to do is tell it not to look to the right of the label.

  1. If we go back to the "Data Field" tab...
  2. We can expand the Maximum Distance property.
  3. Clear the Right property value (or set it to 0 in).
    • This will ensure the Labeled Value extractor will only return text below the label, ignoring the text to the right of it.

  1. Go back to the "Tester" tab.
  2. Test extraction.
  3. And we get the value we want.

But, what about our documents that did have the purchase order number laid out to the right of the label?

  1. If we go back to our "Rechnung" Document Type folder...
  2. ..and test extraction...
  3. Now, we get a result we don't want.

Data Element configurations are globally applied to all Document Types which inherit them. In our case, all our Document Types inherit the Content Model's Data Model (and its child Data Elements, such as our "Purchase Order Number" Data Field). Therefore, the changes we make to the "Purchase Order Number" Data Field's extractor will affect all documents of all Document Types. It's simply going to execute as we configure it, regardless of which specific Document Type is extracted.

We're really in a situation where we want one Document Type to use one configuration and another Document Type to use a slightly different configuration. This is exactly what "Data Element Overrides" are for.
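
Conceptually, an override is a small set of property values layered on top of the shared Data Element configuration for one Document Type. A purely illustrative sketch (hypothetical Python with made-up property names and values, not Grooper's configuration format):

    # Illustrative sketch of the override idea: each Document Type can supply a
    # small set of property values that are layered over the shared Data Field
    # configuration at extraction time. Property names and values are made up.

    base_config = {
        "Value Extractor": "Labeled Value",
        "Maximum Distance": {"Right": 2.0, "Below": 2.0},
    }

    overrides_by_doc_type = {
        "Factura": {"Maximum Distance": {"Right": None}},   # only look below the label
    }

    def effective_config(doc_type):
        config = {key: (dict(value) if isinstance(value, dict) else value)
                  for key, value in base_config.items()}
        for key, value in overrides_by_doc_type.get(doc_type, {}).items():
            if isinstance(value, dict):
                config[key].update(value)
            else:
                config[key] = value
        return config

    print(effective_config("Rechnung")["Maximum Distance"])  # {'Right': 2.0, 'Below': 2.0}
    print(effective_config("Factura")["Maximum Distance"])   # {'Right': None, 'Below': 2.0}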

Data Element Override Basics

Before we get into setting up "Data Element Overrides", we will rewind a bit and set our Labeled Value extractor's Maximum Distance properties back to the default settings.

  1. For the Factura Document Type folder...
  2. ... we are no longer getting the correct value.

  1. If we go back to the "Data Field" tab...
  2. ... we can see the Maximum Distance has been reset back to its defaults. This is why we are no longer getting the correct results for the Factura Document Type.

What we want to do here is change how these properties are configured for the "Purchase Order Number" Data Field for the "Factura" Document Type, and ONLY for the "Factura" Document Type. "Data Element Overrides" allow us to do this by overriding a Data Element's property settings for a specific Document Type (in our case, the "Purchase Order Number" Data Field for the "Factura" Document Type).

"Data Element Overrides" are configured using the Document Type object to which they will be applied. We will thus configure an override for the "Factura" Document Type.

  1. Select the "Factura" Document Type in the Node Tree Viewer.
  2. Navigate to the "Overrides" tab.
  3. Here you will see all Data Elements the selected Document Type inherits.
    • Important! Overrides are configured in this UI for the Document Type selected in the Node Tree Viewer NOT the Batch Viewer.

  1. Select the Data Element whose properties you wish to override.
    • In our case, we want to change the property configuration of the "Purchase Order Number" Data Field.
  2. What you see here is a duplication of the selected Data Element's property grid.
    • This is how the "Purchase Order Number" Data Field is currently configured. If we navigated back to that object in the Node Tree Viewer, we would see the exact same property configuration as we see here. For example, we can see the Value Extractor property is set to Labeled Value, just as it is on the object itself.

Using the "Property Overrides" UI, any property configuration we edit will only apply to the selected Document Type (in our case, the "Factura" Document Type).

  1. Expand the Labeled Value extractor's sub-properties
  2. Expand the Maximum Distance property's sub-properties.
  3. Clear the Right property's value (or set it to 0 in).

Now the "Purchase Order Number" Data Field will extract using these settings, only for the "Factura" Document Type.

  1. FYI: Any overridden Data Element will have an orange star icon next to it to indicate an override is set.

  1. If you have a "Factura" Document Type folder selected in the Test Batch...
  2. ...and test extraction for the overridden Data Element, we now get extraction results using the overridden property configuration.

  1. Going back to the "Purchase Order Number" Data Field, we can verify the override only affects the "Rechnung" and "Factura" Document Types as expected: it applies to "Factura" and leaves every other Document Type alone.
  2. Selecting our "Rechnung" Document Type folder...
  3. ...and testing extraction...
  4. We get the result we want.
    • The "Rechnung" Document Type has no overrides configured, and thus uses the Maximum Distance configuration as-is for the Data Field.

Data Element Overrides can be an effective way of fine-tuning extraction logic specific to an individual Document Type. Because the Label Set approach is more templated in nature, each Document Type corresponds to one specific format, meaning the document's layout will be consistent for each folder classified as that Document Type. Many users will take advantage of this and leverage Data Element Overrides for various fields on various Document Types, especially when utilizing Label Sets.

There is a shortcut to configuring Data Element Overrides using the "Labels" collection UI, which we will demonstrate in the next tab.

Overrides and the Labels UI

In the previous tab, we taught you the normal way to configure Data Element Overrides for a Document Type. You can configure overrides in this manner whether or not you're using a Labeling Behavior in your Content Model. If you are using a Labeling Behavior, there is a shortcut to edit overrides for a Data Element. You can do it directly from the "Labels" tab, using the same UI you use to collect labels.

  1. Here, we have selected the "Labeling Behavior - Invoices - Model" Content Model.
  2. We have also navigated to the "Labels" tab.
  3. We need an override for the "Envoy" Document Type.
  4. Just like the "Factura" Document Type, these invoices present the purchase order number below the label and not to the right.
  5. To set the override, we need to click the object type icon next to the label.
    • Important! Make sure you have a folder of the desired Document Type selected before you click the icon. Overrides configured in the Labels UI are configured for the Document Type selected in the Batch Viewer. Since you're not manipulating a selected Document Type object in the Node Tree, this is how Grooper "knows" which Document Type's overrides you are editing.

So, we need an override for the "Purchase Order Number" Data Field for the "Envoy" Document Type, which we can do without leaving the Labels UI.

  1. An Overrides window should pop up for you to edit the Data Element's overrides.
  2. Just like before, this is a duplication of the Data Element's property grid. Any adjustments you make to the Data Element will execute only for the Document Type selected. Here we have gone into the Maximum Distance property and cleared the Right value.
    • For example, we can clear out the Labeled Value extractor's Right Maximum Distance property, forcing the Labeled Value extractor to only "look" for text below the label.
    • FYI: You can override ANY property for ANY Data Element. For example, you could use a completely different extractor type for a Data Field for a specified Document Type.
  3. Click "SAVE" to save the override configuration.

  1. Overridden Data Elements will have an orange star next to them in the Labels UI as well.

Furthermore, you can test the override directly from the Labels UI as well. You can actually test extraction for the whole Data Model!

  1. Click the "Test" button to test the Data Model's extraction for the selected document folder in the Batch Viewer (including override settings for any Data Elements).

  1. This will open the "Results" in a new windowed tab. You will be presented with the extraction results for the selected document folder's Data Model.
  2. If there are any Data Elements with overrides, such as our "Purchase Order Number" Data Field, they will extract using the override configuration (just as they will when the document is actually extracted by the Extract activity in a Batch Process).

The Label Match Extractor

About Label Match

The Label Match extractor is extremely similar to the List Match extractor in that it matches one or more items in a defined list. However, it is designed specifically to work with the Labeling Behavior functionality. It will use the fuzzy extraction and vertical and constrained wrapping settings defined on the Content Model if a Labeling Behavior is enabled. This way, you can have a single, unified set of fuzzy match settings for multiple extractors. Rather than configuring these settings, including the confidence score threshold and fuzzy weighting, for multiple extractors, you can configure them just once when enabling the Labeling Behavior and all Label Match extractors will use them.

  • For more information on fuzzy extraction, visit the Fuzzy RegEx article.

For the Label Match extractor to return a result, two conditions must be met.

  1. The document folder must be classified.
    • In other words, it must have a Document Type assigned to it.
  2. That Document Type must have a Labeling Behavior enabled.
    • Either on the Document Type or, more typically, its parent Content Model.

Label Match Example

  1. In this example, a Value Reader is configured to return a small list of field labels on an invoice, using the Label Match Extractor Type.
  2. Label Match is selected as the Extractor Type.

  1. The list is entered in the Local Entries editor (just like you do with the List Match extractor).
    • Or, you can reference a Lexicon of list items using the "Properties" tab.
  2. The Prefix and Suffix Patterns are entered here.
    • ^|[^\w] is the default Prefix Pattern.
    • $|[^\w] is the default Suffix Pattern.
  3. Upon execution, notice some results are returned with a confidence below 100%.
    • This is due to the fuzzy matching settings configured from the Labeling Behavior. The Label Similarity property was set to 90%. Any items in the list with a fuzzy matching similarity score above 90% are returned. Any falling below 90% (for example the list item CALLER:) are not.
    • Note this means changing the Labeling Behavior settings will impact ALL Label Match extractors for the Content Model's Document Types.
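
To illustrate how a single shared similarity threshold governs every entry, here is a hedged sketch (plain Python using difflib as a stand-in for Grooper's fuzzy matching, with made-up list entries and OCR text):

    from difflib import SequenceMatcher

    # Illustrative sketch: one shared similarity threshold (like the 90% Label
    # Similarity setting) applied to every entry in a Label Match-style list.
    # difflib is only a stand-in here, not the fuzzy algorithm Grooper uses, and
    # the list entries and OCR text are made up.

    def similarity(a, b):
        return SequenceMatcher(None, a.upper(), b.upper()).ratio()

    def label_match(list_entries, ocr_tokens, threshold=0.90):
        results = []
        for token in ocr_tokens:
            for entry in list_entries:
                score = similarity(entry, token)
                if score >= threshold:
                    results.append((token, entry, score))
        return results

    list_entries = ["INVOICE DATE:", "DUE DATE:", "CALLER:"]
    ocr_tokens = ["INVO1CE DATE:", "DUE DATE:", "CALLED:"]   # OCR errors included

    for token, entry, score in label_match(list_entries, ocr_tokens):
        print(f"{token!r} matched {entry!r} at {score:.0%}")
    # CALLED: never reaches the 90% threshold against CALLER:, so it is not returned.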

Where are these Labeling Behavior settings again?

  1. The Content Model selected here has a Labeling Behavior enabled.
  2. Labeling Behavior is enabled using the Behaviors property. Click the ellipsis button to bring up the "Behaviors" window.

  1. In this window, you can adjust the Behaviors you have set.
  2. The Label Match extractor will use all the fuzzy extraction and text wrapping settings defined here.