2021:Labeling Behavior (Behavior)

From Grooper Wiki


This article is about an older version of Grooper.

Information may be out of date and UI elements may have changed.


A Labeling Behavior is a Content Type Behavior designed to collect and utilize a document's field labels in a variety of ways. This includes functionality for classification, field extraction, table extraction, and section extraction.

The Labeling Behavior functionality allows Grooper users to quickly onboard new Document Types for structured and semi-structured forms, utilizing labels as a thumbprint for classification and data extraction purposes. Once the Labeling Behavior is enabled, labels are identified and collected using the "Labels" tab of Document Types. These "Label Sets" can then be used for the following purposes:

  • Document classification - Using the Labelset-Based Classification Method
  • Field based data extraction - Primarily using the Labeled Value Extractor Type
  • Tabular data extraction - Primarily using a Data Table object's Tabular Layout Extract Method
  • Sectional data extraction - Primarily using a Data Section object's Transaction Detection Extract Method
FYI The Labeling Behavior and its functionality discussed in this article are often referred to as "Label Set Behavior" or simply "Label Sets".

You may download and import the file below into your own Grooper environment (version 2021). This contains the Batch(es) with the example document(s) discussed in this article and the Content Model(s) configured according to the How To section's instructions.

About

Labels serve an important function on documents. They give the reader critical context to understand where data is located and what it means. How do you know the difference between the date on an invoice document indicating when the invoice was sent and the date indicating when you should pay the invoice? It's the labels. The labels are what distinguishes one type of date from another. For example, "Invoice Date" for the date the invoice was sent and "Due Date" for the date you need to pay by.

Labels can be a way of classifying documents as well. What does one individual label tell you about a document? Well, maybe not much. However, if you take them all together, they can tell you quite a bit about the kind of document you're looking at. For example, a W-4 employee withholding form is going to use different labels than an employee healthcare enrollment form. These are two very different documents collecting very different information. The labels used to collect this information are thus different as well.

Furthermore, you can even tell the difference between two very closely related documents using labels as well. For example, two different invoices from two different vendors may share some similarity in the labels they use to detail information. But there will be some differences as well. These differences can be useful identifiers to distinguish one from the other. Put all together, labels can act as a thumbprint Grooper can use to classify a document as one Document Type or another.

Even though these two invoices share some labels (highlighted in blue), there are others that are unique to each one (highlighted in yellow). This awareness of how one kind of invoice from one vendor uses labels differently from another can give you a method of classifying these documents using their label sets.


The Labeling Behavior is built on these concepts, collecting and utilizing labels for Document Types in a Content Model for classification and data extraction purposes.

As a Behavior, the Labeling Behavior is enabled on a Content Type object in Grooper.

While you can enable Labeling Behavior on any Content Type, in almost all cases, you will want to enable this Behavior on the Content Model.

Typically, you want to collect and use label sets for multiple Document Types in the Content Model, not just one Document Type individually. Enabling the Behavior on the Content Model will enable the Labeling Behavior for all child Document Types, allowing you to collect and utilize labels for all Document Types.

  1. Here, we have selected a Content Model in the Node Tree.
  2. To add a Behavior, select the Behaviors property and press the ellipsis button at the end.
  3. This will bring up a dialogue window to add various behaviors to the Content Model, including the Labeling Behavior.
  4. Add the Labeling Behavior using the "Add" button.
  5. Select Labeling Behavior from the listed options.

  1. Once added, you will see a Labeling Behavior item added to the Behaviors list.
  2. Selecting the Labeling Behavior in the list, you will see property configuration options in the right panel.
    • The configuration options in the property panel pertain to fuzzy matching of collected labels, as well as constrained and vertical wrapping capabilities used to target stacked labels.
    • By default, Grooper presumes you will want to use some fuzzy matching and enable constrained and vertical wrapping. These defaults work well for most use cases. However, you can adjust these properties here as needed (a short sketch after this list illustrates what fuzzy matching does).
  3. Press the "OK" button to finish adding the Labeling Behavior and exit this window.
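
To give a rough sense of what the fuzzy matching setting does, consider the sketch below. It is a conceptual Python illustration, not Grooper's matching engine, and the 0.85 threshold is an arbitrary example rather than a Grooper default. The point is simply that an OCR'd label such as "lnvoice Number" can still match the collected label "Invoice Number" when small character differences are tolerated.

from difflib import SequenceMatcher

# Conceptual sketch: fuzzy-matching an OCR'd label against a collected label.
def fuzzy_match(ocr_text, collected_label, threshold=0.85):
    ratio = SequenceMatcher(None, ocr_text.lower(), collected_label.lower()).ratio()
    return ratio >= threshold

print(fuzzy_match("lnvoice Number", "Invoice Number"))  # True  (one misread character)
print(fuzzy_match("Total Due", "Invoice Number"))       # False (a different label entirely)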

Once the Labeling Behavior is enabled, the next big step is collecting label sets for the various Document Types in your Content Model.

  1. With the Labeling Behavior enabled, you will now see a "Labels" tab present for the Content Model.
    • This tab is also now present for each individual Document Type as well.
  2. Label sets are collected in this tab for each Document Type in the Content Model.

Each Document Type has its own set of labels used to define information on the document. For example, the "Factura" Document Type in this Content Model uses the label "PO Number" to call out the purchase order number on this invoice document. A different Document Type, corresponding to a different invoice format, might use a different label such as "Purchase Order Number" or "PO #".

  1. Ultimately, this is the data we want to collect using the Content Model's Data Model.
  2. We use the "Labels" tab to collect labels corresponding to the various Data Elements (Data Fields, Data Tables, and Data Sections) of the Data Model.
    • This provides a user interface to enter a label identifying the value you wish to collect for the Data Elements.
  3. For example, the label "PO Number" identifies the purchase order number for this invoice.
  4. Therefore, the label "PO Number" is collected for the "Purchase Order Number" Data Field in the Data Model.

For more information on collecting label sets for the Document Types in your Content Model see the How To section of this article.

Once label sets are collected for each Document Type, they can be used for classification and data extraction purposes.

For example, labels were used in this case to:

  1. Classify the document, assigning it the "Factura" Document Type.
  2. Extract all the Data Fields seen here, collecting field based data from the document.
  3. Extract the "Line Items" Data Table, collecting the tabular data seen here.

For more information on how to use labels for these purposes, see the How To section of this article.

How To

The Labeling Behavior (often referred to as "Label Set Behavior" or just "Label Sets") is well suited for structured and semi-structured document sets. Label Sets are particularly useful for situations where you have multiple variations for one kind of document or another. While the information you want to extract from the document set may be the same from variation to variation, how the data is laid out and labeled may be very different from one variation of the document to another. Label Sets allow you to quickly onboard new Document Types to capture new form structures.

We will use invoices for the document set in the following tutorials.

In a perfect world, you'd create a Content Model with a single "Invoice" Document Type whose Data Model would successfully extract all Data Elements for all invoices from all vendors every time no matter what.

This is often not the case. You may find you need to add multiple Document Types to account for variations of an invoice from multiple vendors. Label Sets give you a method of quickly adding Document Types to model new variations. In our case, we will presume we need to create one Document Type for each vendor.

We will start with five Document Types for invoices from five vendors.

  • Factura
  • Lasku
  • Envoy
  • Rechnung
  • Arve

You may download and import the file below into your own Grooper environment (version 2021). This contains the Batch(es) with the example document(s) discussed in this tutorial and the Content Model(s) configured according to the instructions.

Collect Label Sets

Navigate to the Labels UI

Collecting labels for the Document Types in your Content Model will be the first thing you want to do after enabling the Labeling Behavior. Labels for each Data Element in the Document Type's Data Model are defined using the "Labels" tab of the Content Model.

  1. Navigate to the "Labels" tab of the Content Model.
  2. With a Batch selected in the "Batch Selector" window panel, select a document folder.
  3. Press the "Set Type..." button to set the Document Type whose labels you wish to collect.
  4. This will bring up the "Set Content Type" window.
  5. From this window, select the Document Type for the selected document folder whose labels you wish to collect.
    • In this case, this document is an invoice from "Factura Technology Corp". We have selected the "Factura" Document Type.
  6. Press "OK" to finish.
FYI If you haven't added a Document Type for the selected document folder yet, you can use the "Create Type" button instead to both create a new Document Type and set it.

  1. Upon setting the Document Type, the document folder is assigned the selected Document Type.
    • Or in other words, this document is now classified as a "Factura" document.
  2. Upon setting a Document Type, that Document Type's Data Model and its child Data Elements will appear in the label collection UI.
    • Labels are primarily collected as they correspond to Data Elements in a Data Model. However, we will see how to add custom labels that don't correlate to a Data Element as well by the end of this tutorial. Custom labels are often used as additional features for document classification.


Collect Field Labels

Now that this document has been classified (assigned a Document Type from our Content Model), we can collect labels for its Document Type. This can be done in one of three ways:

  1. Lassoing the label in the "Document Viewer".
  2. Double-clicking the label in the Document Viewer.
  3. Typing the label in manually.
Going forward, this tutorial presumes you have obtained machine readable text from these documents, either OCR'd text or native text, via the Recognize activity.

Generally the quickest way is by simply lassoing the label in the "Document Viewer".

  1. Select the Data Element whose label you wish to collect.
    • Here, we are selecting the "Invoice Number" Data Field.
  2. Press the "Select Region" button.
  3. With your cursor, lasso around the text label on the document.

  1. Upon lassoing the label in the Document Viewer, the OCR'd or native text behind the selected region will be used to populate the Data Element's label.
    • At this point, the label for the "Invoice Number" Data Field is now "Invoice Number" because that's the text data we selected. Whatever text characters you lasso with your cursor will be assigned as the label.
  2. Notice this label also now appears in the Header tab below. That's because we had the Header tab selected when we lassoed the label.
    • The text collected here ("Invoice Number") is the Header label for the "Invoice Number" Data Field.
    • We'll talk about the difference between Header, Footer, and Static labels at later points. This will be important when using labels for data extraction purposes.
    • However, think of the Header label as the "main" label identifying the Data Element's value. The Header label indicates where the data "starts". If you're a human reading the document, you start looking for the corresponding data after you find and read the Header label; the same is true for Grooper (see the short sketch after this list).
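
One way to picture the Header/Footer distinction is as start and end markers for the value. The sketch below is a conceptual Python illustration only, with made-up text and a hypothetical Footer label; it is not Grooper's extraction logic.

# Conceptual sketch only: the Header label marks the start of the value,
# a Footer label (if collected) marks the end. Text here is illustrative.
document_text = "Invoice Number IN165798 Invoice Date Feb 26, 2014"

header_label = "Invoice Number"
footer_label = "Invoice Date"    # hypothetical Footer label for this field

start = document_text.index(header_label) + len(header_label)
end = document_text.index(footer_label, start)
print(document_text[start:end].strip())   # "IN165798"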

If you choose, you may also manually enter a label for a Data Element by simply typing it into the text box.

  1. Here we've selected the "Purchase Order Number" Data Field and entered "PO Number".
  2. This will correspond to the label "PO Number" on the document itself.
Whether lassoing the text using the Document Viewer or manually typing into the textbox, you may collect a maximum of one Header label and one Footer label per Data Element (and one Static label for Data Fields) per Document Type.

  1. Upon entering the label into the text box, you'll see the label in the Header tab, just like we saw when we collected a label by lassoing the text in the Document Viewer.
  2. Notice as well, there is a green checkmark next to the Header tab (and the box below is highlighted green).
    • This means the text label is matching something on the document. If it did not, you would see a red "X" next to the Header tab and the box below would be highlighted red.
  3. Also note, since this label is being returned on this document, we can verify it in the Document Viewer. The selected Data Field ("Purchase Order Number") and its text label are highlighted green on the document, indicating 1) it was successfully located on the document and 2) where it was located.

  1. Continue lassoing or manually entering labels until all are collected.
  2. Next, we will focus on collecting labels from tables and table columns (the Data Table and Data Column elements in a Data Model). The process is essentially the same, but bears some extra explanation.


Collect Table and Column Labels

Table and column labels can be used for tabular data extraction as well, by setting a Data Table object to use the Tabular Layout Extract Method.

When collecting labels for this method of table extraction, keep in mind you must collect the individual column header labels, and may optionally collect the full row of column header labels as well.

While it is optional, it is generally regarded as best practice to capture the full row of column header labels. This will generally increase the accuracy of your column label extraction. We will do both in this tutorial.

  1. We will collect the full row of column header labels for the Data Table object's label.
  2. We will collect each individual column header label for each individual Data Column object's label.

This may seem like you are duplicating your efforts, but it is often critical to do both in order for the Tabular Layout Extract Method to map the table's structure and ultimately collect the table's data.

  • In particular, if you are dealing with OCR text containing inaccurate character recognition, establishing the full header row for the table will boost the fuzzy matching capabilities of the Labeling Behavior (as the sketch below illustrates).
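
As a rough illustration of that point, the conceptual Python sketch below compares fuzzy-match scores for a short column header versus the full header row: a couple of misread characters hurt the short header's score far more than they hurt the full row's score. The OCR errors are made up for illustration; this is not the Tabular Layout engine.

from difflib import SequenceMatcher

def score(ocr_text, label):
    return SequenceMatcher(None, ocr_text.lower(), label.lower()).ratio()

# Two misread characters in a short column header is a large relative difference...
print(round(score("0ty. 0rd.", "Qty. Ord."), 2))    # roughly 0.78
# ...but the same two misreads barely move the full header row's score.
print(round(score("0ty. 0rd. Description Unit Price Line Total",
                  "Qty. Ord. Description Unit Price Line Total"), 2))   # roughly 0.95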

  1. To collect the Data Table's label, select the Data Table object in the Labels tab.
    • Here, we've selected the Data Table named "Line Items".
  2. Lasso the entire header row for the table.
    • You may notice there are more columns on this table than we are collecting. As it is on the document, the table has six columns. But we're only collecting four, the "Quantity", "Description", "Unit Price", and "Line Total" Data Columns.
    • Generally, you should collect the whole row of column headers, even if there are extra columns whose data you are not collecting.

  1. Next, collect each child Data Column's header label.
    • Here, we've selected the "Quantity" Data Column.
  2. Lasso the individual column header for the selected Data Column.
    • Here, the stacked label, "Qty. Ord.".

  1. Continue collecting labels for the remaining Data Columns.
  2. We have four Data Columns for this Data Table. Therefore, we collect four header labels from the document.


Auto Map Labels

As you add labels for each Document Type, you may find some documents have labels in common. For example, there are only so many ways to label an invoice number. It might be "Invoice Number", "Invoice No", "Invoice #" or even just "Invoice". Some invoices are going to use one label, others another.

When collecting labels for multiple Document Types you can use the "Auto Map" feature to automatically add labels you've previously collected on another Document Type.

  1. So far, we've only collected labels for one Document Type, the "Factura" Document Type.
  2. Now, we're collecting labels for the "Lasku" Document Type.
  3. Press the "Auto Map" button to automatically assign previously collected labels.

Grooper will search the document's text for labels matching those previously collected on other Document Types.

  1. For example, we collected the label "Remit To:" for the "Remit Address" Data Field for the "Factura" Document Type. The "Auto Map" feature found a match for this label on the document and assigned the "Lasku" Document Type's "Remit Address" Data Field the same label.

If a match is not found, the Data Element's label is left blank.

  1. For example, the label for the "Invoice Amount" Data Field for the "Factura" Document Type was "Amount due".
  2. This label was nowhere to be found on this document. The invoice amount is labeled "Total" on the "Lasku" documents. So, the label is left blank for you to collect.
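
In other words, for each Data Element of the new Document Type, Auto Map either finds a previously collected label in the document's text or leaves the label blank. In rough pseudocode terms, it behaves something like the conceptual Python sketch below (not Grooper's code; the document text is illustrative).

# Conceptual sketch of Auto Map: reuse labels collected on other Document Types
# when they can be found in the new document's text; otherwise leave them blank.
previously_collected = {
    "Remit Address":  ["Remit To:"],      # collected for the "Factura" Document Type
    "Invoice Amount": ["Amount due"],     # collected for the "Factura" Document Type
}

document_text = "... Remit To: ... Total ..."   # the "Lasku" document labels its amount "Total"

auto_mapped = {}
for data_element, known_labels in previously_collected.items():
    match = next((label for label in known_labels if label in document_text), None)
    auto_mapped[data_element] = match           # None means "left blank for you to collect"

print(auto_mapped)   # {'Remit Address': 'Remit To:', 'Invoice Amount': None}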

As you keep collecting labels for more and more Document Types, the Auto Map feature will pick up more and more labels, allowing you to quickly onboard new Document Types.

Be aware, you may still need to validate the auto mapped values and make adjustments.

  1. For example, the label "Date" is very generic.
  2. This label does actually correspond to the invoice date on the "Lasku" Document Type in this case.
  3. However, that could label some other date on another Document Type. Even on this document, the label "Date" is returning the "Date" portion of "Ship Date" and other instances where "Date" is found in the text.
    • As a side note, there are ways to make simple labels like "Date" more specific to the data they pertain to using "Custom Labels". More on that in the next tab.
  4. You can also make minor adjustments to the mapped labels.
    • The mapped label for the "Purchase Order Number" Data Field was "PO Number" (as it was collected for the "Factura" Document Type), but it is more specifically "PO Number:" on the "Lasku" documents. We can just add the colon at the end of the label manually.


Collect Custom Labels

It's important to keep in mind labels are collected for corresponding Data Elements in a Data Model. You collect one label per Data Element (Data Field, Data Section, Data Table or Data Column). What if you want to collect a label that is distinct from a Data Element, one that doesn't necessarily have to do with a value collected by your Data Model? And why would you even want to?

That's what "Custom Labels" are for. Custom labels serve two primary functions:

  1. Providing additional labels for classification purposes.
  2. Providing context labels when a Data Element's label matches multiple points on a document.

Custom Labels may only be added to Data Model, Data Section or Data Table objects' labels. Put another way, any Data Element in the Data Model's hierarchy that can have child Data Elements can have custom labels.

When used for classification purposes, custom labels are typically added to the Data Model itself.

  1. First select the Data Element in the Data Model's hierarchy to which you wish to add the label.
    • In our case, we're selecting the Data Model itself.
  2. Right-click either the Header or Footer tab.
  3. Press the "Add Custom Label..." button.
  4. The following dialogue box will appear.
  5. You may enter a name for the custom label, or use the default "Custom ##" naming convention.
  6. Press the "OK" button when finished.

  1. This will add a new label tab, named whatever you named it in the previous step.
    • Here, we kept with the default "Custom 01" name.
    • Notice the red "X" next to the name "Custom 01" as well. This indicates the label is not matching anything on the document. Currently the label is "Custom 01", which doesn't appear anywhere on the document. We need to change that by collecting a new label.
  2. Collect the custom label by either lassoing the text using the Document Viewer or manually typing in the label.
    • For example, the word "Invoice" might be a useful label for classification purposes. This label isn't used to collect anything in our Data Model, but might be helpful to identify this and other invoices from the Factura Technology Corp as "Factura" Document Types. Collecting the label "Invoice" as a Custom Label will allow us to use it as a feature of this Document Type for classification.

You may add more Custom Labels to the selected Data Element by repeating the process described above.

  1. Right-click any of the label tabs.
  2. Add a new label with the "Add Custom Label..." button.

Custom Labels as Context Labels

Some labels are more specific than others. The label "Invoice Date" is more specific than the label "Date". If you see the label "Invoice Date" you know the date you're looking at is the date the invoice was generated. The label "Date" may refer to the invoice's generation date or it could be part of another label like "Due Date". However, some invoice formats will label the invoice date as simply "Date".

  1. For example, the label "Date" on this "Factura" Document Type does indeed correspond to the invoice date for the "Invoice Date" Data Field.
  2. However, this label pops up as part of other labels too, such as the "Date" in "Due Date" or "Order Date".

This can present a challenge for data extraction. The more generic the label used to identify a desired value, the more likely false-positive results become. There are three separate date values identified by the word "Date" (in full or in part) on this document.

This is the second reason Custom Labels are typically added for a Document Type, to provide extra context for generic labels, especially when they produce multiple results on a document, leading to false-positive data extraction.

There are two steps to adding and using a Custom Label for this purpose:

  1. Add the Custom Label.
  2. Marry the Custom Label with the Data Element's label.

We will refer to this type of a Custom Label as a "Context Label" from here out.

The only "trick" to this is adding the Context Label to the appropriate level of the Data Model's hierarchy.

Remember, a Custom Label may only be added to a Data Model, Data Section or Data Table object. We cannot add a Custom Label to a Data Field, such as the "Invoice Number" Data Field.

To add a Context Label a Data Field can use, we must add the Custom Label to its direct parent Data Element.

  1. In the case of the "Invoice Date" Data Field its direct parent Data Element is the Data Model itself.
  2. Right-click the "Header" or "Footer" tab and select "Add Custom Label..." to add the Custom Label.

  1. The Custom Label we added was "Date Page".
  2. This will provide the simple label "Date" some extra context.
    • Which of the three results for the label "Date" do we want to accept? The one falling within this zone.

Now that we've added the label, we need to marry the Custom Label with the Data Field it's giving extra context to. This is done with the Parent property of a Data Field label.

  1. In our case, the Custom Label provides extra context for the "Invoice Date" Data Field's label. We've selected the "Invoice Date" Data Field.
  2. Select the Parent property.
    • Note: This property is only present for Data Field and Data Column labels.
  3. Using the drop down list, select the Custom Label you wish to use for the Context Label.

  1. Notice with this Context Label added...
  2. ...We only return a single result for the "Invoice Date" Data Field's label "Date". This is the label we want to associate with this Data Field.
  3. The other two results do not fall within the Context Label, and are no longer returned.
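
Conceptually, the Context Label acts as a spatial filter: of all the matches for the generic label, only those falling inside the region matched by the parent Custom Label are kept. The Python sketch below is a rough illustration of that idea with made-up coordinates; it is not Grooper's implementation.

# Conceptual sketch: keep only the "Date" label matches that fall inside the
# region matched by the parent Context Label ("Date Page"). Coordinates are made up.
def inside(box, zone):
    left, top, right, bottom = box
    z_left, z_top, z_right, z_bottom = zone
    return left >= z_left and top >= z_top and right <= z_right and bottom <= z_bottom

context_zone = (5.0, 0.5, 8.0, 1.5)     # region covered by the "Date Page" Custom Label

date_matches = [
    (6.1, 0.9, 6.5, 1.1),               # the "Date" label next to the invoice date -> inside
    (0.8, 4.2, 1.3, 4.4),               # the "Date" in "Due Date"                  -> outside
    (0.8, 5.0, 1.3, 5.2),               # the "Date" in "Order Date"                -> outside
]

kept = [m for m in date_matches if inside(m, context_zone)]
print(kept)   # only the first match survives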


Label Sets & Classification

About Labelset-Based Classification

Labelset-Based is a Classification Method that leverages the labels defined via a Labeling Behavior to classify document folders (Batch Folders).

Label Sets can be used for classifying documents using the Labelset-Based Classification Method. For structured and semi-structured forms, labels end up being a way of identifying a document. Without the field data entered, the labels are really what define the document. You know what kind of document you're looking at based on what kind of information is presented and, in the case of Labelset-Based classification, how that data is labeled. Even when those labels are very similar from one variant to the next, they end up being a thumbprint of that variant. For example, you might use Labelset-Based classification to create Document Types for different variations of invoices from different vendors. The information presented on each variant from each vendor will be more or less the same, and some labels will be more commonly used by different vendors (such as "Invoice Number"). However, if there is enough variation in the set of labels, you can easily differentiate an invoice from one vendor versus another just based on the variation in labels.

Take these four "documents". Each one is collecting the same information:

  • A person's name
  • Their social security number
  • Their birthday
  • Their phone number
  • Their address

So we might have five Data Fields in our Data Model, one for each piece of information. We'd also collect one label for each Data Field as well.

While the data we want from these documents is the same, there is some variation in the labels used for each different document type. We can use that variation to distinguish these four documents from each other by classifying them with the Labelset-Based Classification Method. This is all done by measuring the similarity between the labels matched on the document and the label sets collected for each Document Type.


How is Document Type "B" different from Document Type "A"?

  • It uses the label SSN: instead of Social Security Number:.

How is Document Type "C" different from Document Type "A"?

  • It uses the labels SSN: instead of Social Security Number: and DOB: instead of Date of Birth:.

How is Document Type "D" different from Document Type "A"?

  • It uses the labels SSN: instead of Social Security Number:, DOB: instead of Date of Birth:, and Phone #: instead of Phone Number:.

Using the Labelset-Based Classification Method, unclassified documents are classified by assigning them the Document Type whose labels are most similar. The basic concept is that "similarity" is determined by how many labels are shared between the unclassified document and the label sets collected for the Document Types in your Content Model. The unclassified document is assigned the Document Type with the highest degree of similarity between the matched labels and that Document Type's label set.

The similarity calculation is very straightforward. Grooper searches the document for the labels collected for every Document Type and totals the characters of the labels it matches. Each Document Type's similarity score is the matched label characters divided by the total characters in that Document Type's label set.

If each of these five labels is collected for each Document Type's Label Set, you'd have the following character totals for the set.

  • Document Type "A" - 63 total label characters.
  • Document Type "B" - 44 total label characters.
  • Document Type "C" - 34 total label characters.
  • Document Type "D" - 29 total label characters.

How similar is Document Type "A" to Document Type "B"?

  • "A" uses the label Social Security Number: instead of SSN:
  • However, there is a match for the remaining four labels.
  • The remaining four labels, Name:, Date of Birth:, Phone Number: and Address:, comprise 40 characters.
  • The similarity score is the percentage of matched label characters divided by the total characters in the Document Type's label set.
    • 40 matched label characters / 44 total label characters = 0.9091
    • "A" is roughly 91% similar to "B"

How similar is Document Type "A" to Document Type "C"?

  • "A" uses the label Social Security Number: instead of SSN: and Date of Birth: instead of DOB:
  • However, there is a match for the remaining three labels.
  • The remaining three labels, Name:, Phone Number: and Address:, comprise 26 characters.
  • The similarity score is the percentage of matched label characters divided by the total characters in the Document Type's label set.
    • 26 matched label characters / 34 total label characters = 0.7647
    • "A" is roughly 76% similar to "C"

How similar is Document Type "A" to Document Type "D"?

  • Figure out what labels from "A" match "D", and do the math.
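
The arithmetic is easy to reproduce outside of Grooper. The Python sketch below is a minimal, conceptual reimplementation of the character-based score described above (matched label characters divided by the Document Type's total label characters). Grooper's actual implementation also applies fuzzy matching and the other Labeling Behavior settings. The sketch uses the hypothetical "A" through "D" label sets, so its output also answers the Document Type "D" exercise.

# Conceptual sketch of the character-based similarity score described above.
# Not Grooper's implementation; real matching is fuzzy and position-aware.
label_sets = {
    "A": ["Name:", "Social Security Number:", "Date of Birth:", "Phone Number:", "Address:"],
    "B": ["Name:", "SSN:", "Date of Birth:", "Phone Number:", "Address:"],
    "C": ["Name:", "SSN:", "DOB:", "Phone Number:", "Address:"],
    "D": ["Name:", "SSN:", "DOB:", "Phone #:", "Address:"],
}

def similarity(document_text, label_set):
    """Matched label characters divided by total characters in the label set."""
    total = sum(len(label) for label in label_set)
    matched = sum(len(label) for label in label_set if label in document_text)
    return matched / total

# A "document" that uses Document Type A's labels verbatim.
document_text = "Name: ... Social Security Number: ... Date of Birth: ... Phone Number: ... Address: ..."

for doc_type, labels in label_sets.items():
    print(doc_type, round(similarity(document_text, labels), 4))
# Expected: A 1.0, B 0.9091, C 0.7647, D 0.4483 (the document classifies as Document Type "A")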

If we run one of these "documents" through Grooper, we can see these results very clearly.

  1. The document shares all five labels in common with the "A" Document Type.
  2. Grooper searches for labels matching the label sets for all Document Types in the Content Model and creates a similarity score for each one.
    • You can see the math described above play out here. Matching all labels in the "A" Document Type's label set, the document is considered 100% similar. Less so for the other Document Types because, while they share some labels (like Name:), some are different (like Social Security Number: versus SSN:).
  3. Upon classification, the document folder is assigned the Document Type with the highest similarity score.
    • In this case the "A" Document Type.

Configuring Labelset-Based Classification

Next, we will walk through the steps required to enable and configure the Labelset-Based Classification Method, using our example set of invoice documents.

The basic steps are as follows:

  1. Set the Content Model's Classification Method property to Labelset-Based
  2. Collect labels for each Document Type
  3. Test classification
  4. Reconfigure, updating existing Document Types' Label Sets and adding new Document Types as needed.

Assign the Labelset-Based Classification Method

Once you've figured out you want to use Label Sets to classify your documents, you need to tell your Content Model that's what you want to do! This is done by setting the Content Model's Classification Method property to Labelset-Based.

  1. Select a Content Model in the Node Tree.
    • We've selected the "Labeling Behavior - Invoices" Content Model we've been working with in this How To section of the article.
  2. Select the Classification Method property.
  3. Using the dropdown menu, select Labelset-Based from the list of options.

Next, we will collect labels for each Document Type in the Content Model.

  1. Note we've already added a Labeling Behavior to the Behaviors property.
    • It doesn't matter whether you add a Labeling Behavior and/or collect labels before selecting Labelset-Based for the Classification Method or after.
    • However, you will need to add the Labeling Behavior at some point in order to collect label sets for the Document Types and ultimately use the Labelset-Based method for document classification. Visit the tutorial above if you're unsure how to add the Labeling Behavior to the Content Model.


Collect Labels

See the above how to (Collect Label Sets) for a full explanation of how to collect labels for Document Types in a Content Model. The rest of this tutorial will presume you have general familiarity with collecting labels.

  1. Switch to the "Labels" tab.
  2. Collect labels for each Data Element in the Document Type's Data Model.
  3. Collect labels for each Document Type in the Content Model.

Table Labels and Classification

Table headers are often very useful for Labelset-Based classification, and you will generally want to use them as a classification feature. Currently, if you want to use a Data Table object's labels for classification, you must set the Data Table's Minimum Row Count property to at least "1". This is a known issue in the current version of Grooper and will likely change.

So, if you find Data Table and/or Data Column labels are not included in determining document similarity during classification, do the following:

  1. Navigate to the Data Table object in the Node Tree.
  2. Expand the Row Count Range property.
  3. Select the Minimum property.
  4. Enter 1.

If you have multiple Data Table objects in your Data Model, you will need to repeat these steps for each one.


Test Classification

In general, regardless of the Classification Method used, one of three things is going to happen to Batch Folders in a Batch during classification.

  1. The folder will be assigned the correct Document Type.
  2. The folder will be assigned the wrong Document Type.
  3. The folder will be assigned no Document Type at all.

The Labelset-Based method is no different. If all folders are classified correctly, that's great. However, testing is all about ensuring this is the case and figuring out where and why problems arise when folders are classified wrong or not classified at all.

We will look at a couple examples of how classification can go wrong using the Labelset-Based method, why that is the case, and what to do about it.

FYI

The example Batch in the rest of this tutorial is purposefully small to illustrate a few key points. In the real world, you will want to test using a much larger batch with several examples of each Document Type.

  1. The easiest way to test classification (for the Labelset-Based method or any other) is with the "Classification Testing" tab of the Content Model.
  2. Select a test Batch with the "Batch Selector" dropdown window.
  3. Select a Batch Folder and press the "Classify" button to classify a single document folder.
  4. Select a Batch Folder and press the "Classify All" button to classify all document folders in the Batch.

Now we just need to evaluate the success or failure of our classification. Let's look at a few documents in our Batch before detailing what we will do to resolve any classification errors.

  1. This is a complete success!
    • The Batch Folder has been assigned the "Factura" Document Type.
  2. It indeed should have been classified so. It is an invoice from the Factura Technology Corp.
  3. Its similarity score is 100% similar to the "Factura" Document Type.
    • This means a match has been found for all labels in the "Factura" Document Type's label set.

  1. This is a mitigated success.
    • The Batch Folder has been assigned the "Envoy" Document Type.
  2. It indeed should have been classified so. It is an invoice from Envoy Imaging Systems.
  3. However, it's a mitigated success in that its similarity score is only 85%.
    • That means only 85% of the labels located on this document match the label set for the "Envoy" Document Type.
  4. In this case, this is due to poor OCR data. While some labels may be present on the document, their OCR data is too garbled to match the label in the label set.
    • For example, the label Invoice was not matched because the text was OCR'd as "nvoice".
    • But a win is a win! Part of the reason Labelset-Based can be an effective classification method is you can miss a few labels due to poor OCR and still end up classifying the document appropriately. It is the set as a whole which determines similarity. As long as the document is more similar to the correct Document Type than any of the other Document Types, Grooper has made the right classification decision.

  1. This is a mitigated failure.
    • The Batch Folder should have been assigned the "Envoy" Document Type but it was unclassified.
  2. This is due to its similarity to the "Envoy" Document Type's Label Set falling below 60%.
    • 60% is the default Minimum Similarity for this Content Model. If a Batch Folder fails to achieve a similarity score above 60%, it will remain unclassified, as is the case here.
    • But that's so close! It just fell short in terms of similarity between matched labels and the "Envoy" Document Type's Label Set.
  3. In this case, several of the labels for the Data Elements of our Data Model are smudged on the document. OCR was unable to return these portions of the document. Therefore, the labels were not matched.
  4. Remember we collect one label per Data Element. However, there's all kinds of labels on this document for data we don't necessarily care about. Do we have a Data Field for the "Salesperson ID" field on this invoice? No, it's not data we're choosing to collect.
    • But just because we don't have a Data Field for it doesn't mean it's not a useful label for classification. We will look at how to create custom labels for classification purposes in the next section, Common Problems and Solutions.

  1. This is also a mitigated failure.
    • The Batch Folder should have been assigned a "Stuff and Things" Document Type but it was unclassified.
  2. This is a variation of an invoice from the vendor "Stuff and Things"
  3. You may notice the "Stuff and Things" Document Type does not appear at any similarity in our similarity list.
  4. That's because there isn't a "Stuff and Things" Document Type yet. We need to add one and collect labels for it.
    • This is fairly common with a Labelset-Based approach to classification (and indeed the use of Label Sets in general). It is often most useful in situations where you have a lot of variants of one particular kind of document. The general idea is to use Label Sets to distinguish between the variants by creating one Document Type for each variant, each with their own unique Label Set.
    • Such is the case with invoices. There's lots of different invoice formats, often unique to each vendor. When you get one in a Batch you haven't seen before, you will need to add a new Document Type to account for the new variant. However, as we will see in the next section, onboarding new Document Types with Label Sets is relatively quick and painless.

  1. This is a more severe version of the failure seen in the previous example.
    • The Batch Folder should have been assigned a "Standard" Document Type but it was assigned the wrong Document Type, the "Rechnung" Document Type.
  2. However, we don't have a "Standard" Document Type yet. Just like the previous example, we will need to add one and collect labels for it.
  3. The only thing we will need to watch out for is making sure, once we do add a Document Type for the invoices from Standard Products, these documents classify more confidently as "Standard" than as "Rechnung", beating out its similarity score and receiving the "Standard" Document Type.

  1. This is a complete failure.
    • The Batch Folder should have been assigned the "Envoy" Document Type but it was unclassified.
  2. The document is of poor enough quality to get nearly unusable OCR results.
  3. This resulted in a paltry similarity score of 23%.

What can we do about this?

Sometimes you have to know when to stop. Will it be worth it to reconfigure your Content Model and Label Sets to force Grooper to classify this document in one way or another? Probably not. This is more likely than not an extreme outlier, not representative of the larger document set. It may be easier to kick this document (and other outliers) out to human review, especially if reconfiguring the Content Model is going to negatively impact results in other ways.

You have to know when to leave well enough alone. Outliers like this are a good example of when to do just that.


Common Problems and Solutions

Custom Labels to Boost Similarity

  1. In the above tutorial, we saw this document failed to classify correctly.
    • The Batch Folder should have been assigned the "Envoy" Document Type but it was unclassified.
  2. This is due to its similarity to the "Envoy" Document Type's Label Set falling below 60%.
    • 60% is the default Minimum Similarity for this Content Model. If a Batch Folder fails to achieve a similarity score above 60%, it will remain unclassified, as is the case here.
    • But that's so close! It just fell short in terms of similarity between matched labels and the "Envoy" Document Type's Label Set.
  3. In this case, several of the labels for the Data Elements of our Data Model are smudged on the document. OCR was unable to return these portions of the document. Therefore, the labels were not matched.
  4. Remember we collect one label per Data Element. However, there's all kinds of labels on this document for data we don't necessarily care about. Do we have a Data Field for the "Salesperson ID" field on this invoice? No, it's not data we're choosing to collect.

Just because we don't have a Data Field for it doesn't mean it's not a useful label for classification. Even though we don't need to extract the salesperson's identification number, the fact that the label "Salesperson ID" is present on these invoices could be important. It's another feature that makes up the "Envoy" Document Type. We just need a way of telling Grooper to use this label for classification, even though we can ignore it when it comes time to extract data from these documents.

That is one of the reasons for adding custom labels to a Document Type's Label Set.

  1. To add a custom label, first navigate to the "Labels" tab of the Content Model.
  2. Either:
    1. Select a document folder in the Batch selector that is already assigned the desired Document Type.
    2. Or select a document folder and use the "Set Type..." button to assign it the desired Document Type.
      • In our case we want to add a custom label to the "Envoy" Document Type. We have selected the document folder in the Batch and assigned it the "Envoy" Document Type.
  3. Select a Data Element from the Data Model to which you wish to add the custom label.
    • Most commonly, when adding a custom label for classification purposes, you'll just want to add it to the Data Model root itself, as we've selected here.
  4. Right-click one of the label tabs, "Header" or "Footer".
    • It doesn't matter which one, you just need to right click any label tab.
  5. Select "Add Custom Label..."

  1. This will bring up the following "Add Custom Label" window.
  2. Name the custom label whatever you like.
  3. Press the "OK" button to add the label.

  1. Adding the custom label will add a new label tab named whatever you named it.
    • In this case "Salesperson ID".
  2. Using the text editor, collect the label (either typing it in or lassoing or double clicking it in the document viewer).
    • In this case, Salesperson ID
    • FYI: Grooper will automatically enter whatever you title the custom label. So, in our case, all we actually did was change the custom label's name to "Salesperson ID" and the label Salesperson ID was automatically populated.
  3. This will add the label "Salesperson ID" to the "Envoy" Document Type's Label Set.

Now that this label is in the Label Set, it will be considered a label during classification. The label's there. It's part of the document, whether we're extracting the value or not. We "tell" Grooper labels like these should be considered features for classification by creating custom labels.

FYI You can add as many custom labels as you want.

Indeed, you may want multiple custom labels, adding more label features that distinguish one Document Type from another. To add multiple custom labels, just repeat the process described above, right-clicking the label tabs and adding a new custom label for each label you want to collect.

When we re-classify this Batch, we will see some different results.

  1. Navigate to the "Classification Testing" tab to test classification with the custom label added.
  2. Press the "Classify All" button to classify all document folders in the Batch.
  3. Notice this document now classifies correctly as an "Envoy" Document Type!
  4. Before we added the custom label, this only achieved a similarity score of 59%, falling short of the 60% minimum similarity threshold. Now, it scores a 63% similarity.
    • With another label added to the Label Set, there's more context to what comprises this Document Type.
    • And that's with just one custom label added. There are tons more labels we could collect as custom labels on the document, likely further increasing the similarity score.


Adding New Document Types

The Labelset-Based classification method makes some assumptions about your document processing approach. It shines with structured and semi-structured forms. Labels, more or less, "stay put" on these kinds of documents. You'll see the same field labels over and over again even though the field values will change from document to document. This presumes your Document Types will be very regular (or rigid, with one Label Set very specifically corresponding to one Document Type). If you encounter a new form or variant of an existing form, you likely will need to account for it with a new Document Type.

  1. Such is the case for this document we encountered in the previous tutorial.
  2. The document is unclassified because it doesn't match any of the Label Sets for the existing Document Types.
    • More specifically, its similarity score to the existing Document Types does not meet the 60% minimum similarity threshold for this Content Model.
  3. This should be a "Stuff and Things" Document Type, but we don't have one yet. We need to add it and collect its Label Set to correctly classify the document.

Luckily, the process of adding new Document Types and defining their label sets is quick and painless and actually can become easier the more Document Types you add to the Content Model.

You can do the whole thing in the "Labels" tab of the Content Model.

  1. Navigate to the "Labels" tab in the Content Model.
  2. Select the unclassified document folder for which you want to create a new Document Type.
  3. Press the "Create Type..." button.
  4. This will pop up the following window to add a Document Type.
  5. Name the Document Type whatever you like.
    • In our case we named it "Stuff and Things", for the invoice from the very real company, Stuff and Things, that sells stuff, as well as things.
  6. Press the "OK" button to finish and add the Document Type.

  1. This will add the Document Type to the Content Model.
  2. It will also assign the Document Type to the selected document folder in the test Batch.
  3. Collect labels for the document as discussed in the Collect Labels section of this article.

That's it! You've added a new Document Type and collected its Label Set.

  • Keep in mind, as you add new Document Types to the Content Model you will want to perform regression testing to ensure your classification model is still accurate.

As you keep adding more and more Document Types to the Content Model, you will inevitably keep adding more and more labels for the Data Elements in your Data Model. Eventually, you will come across a new document variant that shares a lot of similarity with an already existing Document Type.

  1. Such was the case with these three documents. They were confidently classified as "Rechnung" Document Types.
  2. Their similarity is 85% - 87%.
  3. However, these aren't invoices from the vendor Rechnung, they are from the vendor Standard Products.
    • They simply share a lot of the same labels. Interestingly, this "problem" is actually going to end up making our job even easier when adding the new Document Type.

This is where the label auto-map functionality comes in handy.

  1. Add the new Document Type.
  2. Assign the new Document Type to the right document folder (the one whose labels you want to collect).
  3. Press the "Auto Map" button.

Grooper will search for matching labels already collected in the Label Sets of other Document Types.

  1. In this case, there was some kind of matching label from another Document Type for nearly every Data Element in the Data Model.
  2. The only thing we have to do now is review the auto-mapped labels, collect any that were not mapped, and re-collect or edit any labels that are not accurate.
    • For example, this header label for the "Line Items" Data Table is not quite right. It's red and not green because the Standard invoices' line items table has an additional column header, so the auto-mapped header row doesn't fully match.

  1. Upon collecting the full header label for this table, everything matches!
    • That's it! We've added a new Document Type and were able to auto-map all labels except one with the press of a button.
  2. We can now press "Save", test our classification, and see if these documents classify correctly.

  1. We will test out our new Document Type using the "Classification Testing" tab.
  2. Press the "Classify All" button.
  3. With the Document Type for the Standard invoices added, and its Label Set collected, these three document folders are now classified correctly.


Volatile Labels

Sometimes, you will collect a label you do not want to use for classification purposes. Most often, this is because the label may or may not be present depending on the document.

For example, some of these invoices from Standard Products have the sales tax totaled on the document. However, some do not.

This is called a "Volatile" label. Its presence on a document is unpredictable. Sometimes it's there. Sometimes it's not. It's an optional piece of information. However, because it's optional (or "volatile") we don't actually want to include this as a label for classification. It's going to decrease the similarity score for documents that do not contain the label.

  1. For example, the selected document here does not have the tax listed on the document.
  2. Since that label is not present, its similarity is lower than if it were present.
    • It drops from 100% to 98% in this case. Now, this may not be a critical drop in similarity for this case, but very well could be for others depending on their OCR quality or presence of multiple volatile labels.

You can indicate these kinds of labels are "volatile" and should not be considered for classification. Whether it's there or not, Grooper will not include it as a feature to measure the similarity between an unclassified document and the Document Type.
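
In terms of the similarity arithmetic sketched earlier, a Volatile label is simply dropped from the calculation, so its absence no longer pulls the score down. The Python sketch below is a rough illustration using a hypothetical label set; the labels and numbers are made up and are not this Content Model's actual Label Set.

# Illustrative sketch: a Volatile label is excluded from the similarity math,
# so its absence on a given document no longer lowers the score.
def similarity(document_text, label_set, volatile=()):
    considered = [label for label in label_set if label not in volatile]
    total = sum(len(label) for label in considered)
    matched = sum(len(label) for label in considered if label in document_text)
    return matched / total

labels = ["Invoice Number", "PO Number:", "Subtotal", "Tax", "Total"]
document_text = "Invoice Number ... PO Number: ... Subtotal ... Total ..."   # no "Tax" label present

print(round(similarity(document_text, labels), 2))                     # 0.93 ("Tax" counts against the score)
print(round(similarity(document_text, labels, volatile={"Tax"}), 2))   # 1.0  ("Tax" ignored)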

  1. To do this, navigate to the "Labels" tab of the Content Model.
  2. Select the Data Element whose label you wish to turn volatile.
    • In our case, we wish to make the "Tax" Data Field's label volatile. As we've seen, sometimes it's present on the document and sometimes it's not.
  3. Change the Volatile property from False to True.

  1. Now, when we classify this document folder...
  2. ...even though the sales tax label is not present on the document...
  3. ...its similarity is 100%!
    • With the label Tax set as a volatile label, it is no longer considered during the similarity calculation. With it missing from the document, it no longer negatively impacts the similarity score.



Use Label Sets for Field Based Extraction

Label Sets and the Labeled Value Extractor Type

Intro to The Labeled Value Extractor

For most static field based extraction, the Labeling Behavior leverages the Labeled Value Extractor Type. Let's first briefly examine how Labeled Value works outside of the Labeling Behavior functionality.

As the name implies, the Labeled Value extractor is designed to return labeled values. A common feature of structured forms is to divide information across a series of fields. But it's not as if you just have a bunch of data randomly strewn throughout the document. Typically, the field's value will be identified by some kind of label. These labels provide critical context for what the data refers to.

Labeled Value relies on the spatial relationship between the label and the value. Most often labels and their corresponding values are aligned in one of two ways.

1. The value will be to the right of the label.

2. The value will be below the label.

The Labeled Value extractor uses two extractors itself, one to find the label and another for the value. If the two extractors' results are aligned horizontally or vertically within a certain amount of space (according to how the Labeled Value extractor is configured), the value's result is returned.
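
The layout check can be pictured with the simplified Python sketch below. It is an illustration only, using made-up coordinates in inches; Grooper's engine works from the document's OCR geometry and the Maximum Distance settings discussed below.

# Simplified sketch of the Labeled Value layout check: accept a value result only
# if it sits to the right of the label or below it, within a maximum distance (inches).
from dataclasses import dataclass

@dataclass
class Box:
    left: float
    top: float
    right: float
    bottom: float

def labeled_value_match(label: Box, value: Box, max_right=2.0, max_bottom=2.0):
    same_row    = value.top < label.bottom and value.bottom > label.top
    same_column = value.left < label.right and value.right > label.left
    right_of_label = same_row and 0 <= value.left - label.right <= max_right
    below_label    = same_column and 0 <= value.top - label.bottom <= max_bottom
    return right_of_label or below_label

label_box = Box(0.5, 1.0, 1.8, 1.2)           # where the label "Invoice Number" was found
value_box = Box(2.0, 1.0, 2.9, 1.2)           # where the value was found, 0.2" to the right
print(labeled_value_match(label_box, value_box))   # True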

  1. For example, we could configure this "Invoice Number" Data Field to utilize the Labeled Value extractor to return the invoice number on the document.
    • Keep in mind this is the "hard" way of doing things. As we will see, the Labeling Behavior will make this process easier.
  2. We've set the Value Extractor to Labeled Value.
  3. The label is returned by the Label Extractor.
    • Here, set to a Pattern Match extractor using the regex pattern Invoice Number
  4. The value is returned by the Value Extractor.
    • Here, set to a Pattern Match extractor using the regex pattern [A-Z]{2}[0-9]{6}
  5. The Maximum Distance property determines how the value may be aligned relative to the label, as well as the maximum distance allowed between the label and the value.
    • The default settings are used here, indicating the value can be aligned horizontally, up to 2 inches from the right of the label, or it can be aligned vertically, up to 2 inches below the label.
  6. Upon execution, the Label Extractor first finds the label, then looks to see if anything matching the Value Extractor is located according to its layout configuration.
    • Sure enough, there is a result, "IN165798".
  7. The Value Extractor's result is collected for the Data Field upon running the Extract activity.

However, the Labeled Value extractor's set up is a little different when combining it with the Labeling Behavior. The end result is a simpler configuration, utilizing collected labels for the Label Extractor.


Label Sets and Labeled Value

Since this Content Model utilizes the Labeling Behavior, at least part of the setup described in the previous tab was unnecessary. If you've collected a label for the Data Field and that Data Field's Value Extractor is set to Labeled Value, there is no need to configure a Label Extractor. Instead, Grooper will pass through the collected label to the Labeled Value extractor.

  1. For example, we've already collected a label for the "Invoice Number" Data Field for the "Factura" Document Type.
  2. The label Invoice Number is returned on the document for the label identifying the document's invoice number.

  1. With the label collected, the set up for this "Invoice Number" Data Field will be much simpler.
  2. Notice the Value Extractor has been set to Labeled Value.
  3. The Label Extractor and Value Extractor are unconfigured (or "blank").
  4. However, upon testing extraction, the invoice number is collected.
    • All that was required, in this case, was to collect the label and set the Data Field's Value Extractor property to Labeled Value. Magic!
    • Not magic. Label sets.
  5. With Labeling Behavior enabled and a label collected for the "Invoice Number" Data Field, the Labeled Value extractor's Label Extractor looks for a match for the collected label.
    • In this case Invoice Number.
  6. Furthermore, with Labeling Behavior enabled and a collected label utilized as the Label Extractor, the Labeled Value extractor's Value Extractor will still return a value even if left unconfigured.
    • It will look for the nearest simple segment according to the layout settings (the Maximum Distance and Maximum Noise properties).
    • The result "IN165796" is indeed the nearest simple segment and the desired result. So, there is technically nothing else we need to do. However, situations are rarely this simple and straightforward. There are some other considerations we should keep in mind.


Best Practice Considerations

While you can get a result without configuring the Labeled Value extractor's Value Extractor, that doesn't mean you should. It is considered best practice to always configure the Value Extractor.

So, why is it considered best practice to do so? The short answer is to increase the accuracy of your data extraction. A simple segment could be anything. If you know the data you're trying to extract has a certain pattern to it, you should target that data according to its pattern. Dates, for example, follow a few different patterns. Maybe it's "07/20/1969" or "07-20-69" or "July 20, 1969", but you know it's a date because it has a specific syntax or pattern to it. To increase the accuracy of your extraction, you should configure the Value Extractor with an extractor that returns the kind of data you're attempting to return.
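
As a rough illustration of what "targeting data by its pattern" means, the date formats mentioned above can be matched with an ordinary regular expression. This is only a sketch; it is not the Data Type referenced later in this tutorial.

```python
import re

# A rough date pattern along the lines described above. Illustrative only.
DATE = re.compile(
    r"\b(?:\d{1,2}[/-]\d{1,2}[/-]\d{2,4}"                                               # 07/20/1969, 07-20-69
    r"|(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.? \d{1,2}, \d{4})\b"  # July 20, 1969
)

for segment in ["07/20/1969", "07-20-69", "July 20, 1969", "Page"]:
    print(segment, "->", bool(DATE.search(segment)))
# The three date formats match; a generic segment like "Page" does not.
```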

We can see fairly quickly why leaving the Labeled Value extractor's Value Extractor unconfigured is not ideal.

  1. All the Data Fields in this Data Section have collected labels and are using the Labeled Value extractor.
    • Except the "Vendor Name" Data Field. Ignore this Data Field for the time being.
  2. We only get a few accurate results.
    • Without its Value Extractor configured, the Labeled Value extractor is going to grab whatever segment it can get. While that segment can be what you want, it is not necessarily what you want.
      • The Value Extractor will allow you to target more specifically what you want to return.
    • Furthermore, while the "Sales Tax" and "Invoice Amount" results may look accurate, they too are not. There are some OCR errors. The extracted segments "0,00" and "54.594.00" should be returned as "0.00" and "54,594.00".
      • The Value Extractor will also allow you to utilize Fuzzy RegEx, Lexicon lookups, output formatting, Data Type Collation methods and other extractor functionalities to manipulate, format, and filter results.
  3. For example, the "Date" Data Field returns the segment "Page" to the right of the label Date where it should be returning the date below it, "Feb 26, 2014".
    • If we were instead to configure the Labeled Value extractor's Value Extractor to only return dates, we'd get the more specific result we want and not the generic segment we don't.
    • FYI: When the Value Extractor property is left unconfigured in this manner, the Labeled Value extractor follows a "horizontal then vertical" order of operations. If both a Right Maximum Distance and a Bottom Maximum Distance are configured, it will look for results to the right of the label (aligned horizontally) before looking for results below the label (aligned vertically).

  1. If we reconfigure this "Invoice Date" Data Field slightly we will get a much more accurate result.
  2. We've kept the Data Field's Value Extractor set to Labeled Value.
  3. The only thing we've changed is we've set the Labeled Value extractor's Value Extractor to a Reference extractor pointing to a Data Type returning dates.
  4. Upon testing extraction, we can see the Data Field now collects the value we want, the invoice's date "02/26/2014".
  5. By configuring the Labeled Value extractor's Value Extractor, it's no longer looking for just simple segments next to the label. So, the word "Page" is no longer returned. Instead, it's looking for results matching the Value Extractor's results.
    • This increases the specificity of what the Labeled Value returns. Increased specificity yields increased accuracy.

Configuring the Labeled Value extractor's Value Extractor also gives you the myriad functionalities available to extractors. For example, Fuzzy RegEx is one of the main ways Grooper gets around poor OCR data at the time of extraction. When the text data is just a couple characters off from the extractor's regex pattern, Fuzzy RegEx can not only match the imperfect data but "swap" the wrong characters for the right ones, effectively cleansing your result.

  1. Take the "Invoice Amount" Data Field for example.
  2. Here, the Data Field's Value Extractor is set to Labeled Value.
  3. And, the Labeled Value extractor's Value Extractor is left unconfigured.
  4. The Labeled Value extractor first locates the collected label Amount Due and without a configured Value Extractor returns the nearest text segment (according to the Maximum Distance settings).
  5. This is almost the result we want.
    • It's the "right" result in that, yes, that is the text segment that corresponds to the invoice amount due for this invoice.
    • But it's very much the wrong result in that the OCR text data is inaccurate. "54.594.00" is not a valid currency value. It should be "54,594.00" with the first period being a comma.

However, that's just a single character off from being the right result. We could build an extractor to return currency values looking to make fuzzy swaps like this, both matching text that is slightly off and reformatting the result to match a valid currency format. If we used that extractor as the Labeled Value extractor's Value Extractor it would not only find the segment but also reformat the result, swapping the mis-OCR'd period for what it should be, a comma.

And we've done just that.

  1. Here, we've set the Labeled Value extractor's Value Extractor to reference a Data Type returning fuzzy matched currency values.
  2. The Value Extractor matches the text we want, below the label Amount Due
  3. And since the referenced extractor uses Fuzzy RegEx the returned result is now a valid currency value.
    • The result is now "54,594.00" instead of "54.594.00". The first period was swapped for a comma.
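
Outside of Grooper, the "match it anyway, then fix it" idea can be pictured with a rough stand-in like the sketch below. This is ordinary regex plus string cleanup, not Grooper's Fuzzy RegEx engine, and the pattern is only an assumption for illustration.

```python
import re

# Illustrative stand-in for the swap-and-reformat behavior described above:
# match a currency-like string even when the thousands separator was mis-OCR'd
# as a period, then rebuild it in a valid currency format.
LOOSE_CURRENCY = re.compile(r"\b\d{1,3}(?:[.,]\d{3})*[.,]\d{2}\b")

def normalize_currency(segment):
    match = LOOSE_CURRENCY.search(segment)
    if not match:
        return None
    digits = re.sub(r"[.,]", "", match.group())         # drop every separator
    return f"{int(digits[:-2]):,}.{digits[-2:]}"        # put them back correctly

print(normalize_currency("54.594.00"))   # -> 54,594.00 (mis-OCR'd period swapped for a comma)
print(normalize_currency("0,00"))        # -> 0.00
```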


Additional Considerations When Using Labeled Value with Label Sets

Custom Labels to Exclude Results

Continuing the discussion above of an unconfigured Value Extractor on the Labeled Value extractor, let's examine the results of the "Purchase Order Number" Data Field.

  1. We've selected the "Purchase Order Number" Data Field in the Node Tree.
  2. The Data Field's Value Extractor property is set to Labeled Value.
  3. It currently does not have the Labeled Value extractor's Value Extractor configured.
  4. Left unconfigured, we get an undesirable result, a rather large text segment "Order Date Customer No. Salesperson Order No. Ship Via".

This is obviously not what we want. We want the purchase order number listed below it. Ultimately, we will follow best practice and configure the Labeled Value extractor's Value Extractor property.

However, before we do, this gives us an opportunity to demonstrate some additional functionality of the Labeling Behavior.

This data "Order Date Customer No. Salesperson Order No. Ship Via" is itself comprised of labels pointing to various values on the document. Even though we haven't set up Data Fields in this Data Model to capture the values they point to, we know this is data we don't want. In general, you don't want to use Grooper to extract labels, you want to extract values.

What's happening here is Grooper is returning all the text on this single line until a collected label in this Document Type's label set is located. In this case, the label Terms was collected for the "Payment Terms" Data Field. None of the text between the label PO Number and the label Terms has been collected in the label set. So, the Labeled Value extractor returns all the text between the "Purchase Order Number" Data Field's label (PO Number) and the next encountered label (Terms), resulting in "Order Date Customer No. Salesperson Order No. Ship Via".

This is very specific functionality to the Labeled Value extractor and its interaction with label sets. It will only behave this way if you:

  1. Are using the Labeling Behavior and the Data Field's Value Extractor is set to Labeled Value.
  2. Have collected other labels on the same line as the Data Field's label.
  3. Have not configured the Labeled Value extractor's Value Extractor.

This may be clearer if we add a Custom Label to the label set.

  1. Here we've added a Custom Label Salesperson to the "Purchase Order Number" Data Field's parent Data Section's labels.
  2. Be aware, the Custom Label must be added to the Data Field's parent Data Element's labels in order for this to work. This will be either a Data Section if it is a child of a Data Section or the Data Model itself if it is not.
    • In this case, the "Purchase Order Number" is a child of the "Static Fields" Data Section. This is why we added the Custom Label to the Data Section's labels and not the Data Model.
  3. Now we have both a Salesperson label and a Terms label for this Document Type's label set.

  1. Now, examine the difference in the "Purchase Order Number" Data Field's extraction result.
  2. It stops at the Custom Label we added, Salesperson
  3. Ultimately, returning everything between the Data Field's label PO Number and the next label to the right Salesperson
    • In other words, "Order Date Customer No."
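
In plain terms, the behavior demonstrated above works something like the string logic below. The sample line is a simplified stand-in for the header line on the example invoice, and the function is hypothetical; this is only an illustration of the idea, not how Grooper implements it.

```python
# Illustrative only: with no Value Extractor configured, text on the label's
# line is returned up to the next collected label in the label set.
def value_after_label(line, field_label, other_labels):
    remainder = line[line.index(field_label) + len(field_label):].strip()
    cut = len(remainder)
    for label in other_labels:
        position = remainder.find(label)
        if position != -1:
            cut = min(cut, position)
    return remainder[:cut].strip()

line = "PO Number Order Date Customer No. Salesperson Order No. Ship Via Terms"
print(value_after_label(line, "PO Number", ["Terms"]))
# -> "Order Date Customer No. Salesperson Order No. Ship Via"
print(value_after_label(line, "PO Number", ["Terms", "Salesperson"]))
# -> "Order Date Customer No."
```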


If we were to go one step further and add an Order Date Custom Label, we wouldn't get any result returned at all!

With no text between the Data Field's label and another label in the label set, the Labeled Value extractor will return nothing at all.

FYI

One last time, for emphasis...

Keep in mind this is very specific functionality to the Labeled Value extractor and its interaction with label sets. It will only behave this way if you:

  1. Are using the Labeling Behavior and the Data Field's Value Extractor is set to Labeled Value.
  2. Have collected other labels on the same line as the Data Field's label.
  3. Have not configured the Labeled Value extractor's Value Extractor.

HOWEVER, this was not the right solution for this problem.

This was only an educational exercise to make you aware of how labels in a label set interact with the Labeled Value extractor when its Value Extractor is left unconfigured.


We should have followed our best practice advice and configured the Labeled Value extractor's Value Extractor. We did not really have to go through the trouble of adding a bunch of Custom Labels. With the Labeled Value extractor's Value Extractor configured, it ignores this whole business of finding a nearby segment or returning text on a line up to the next label in a label set, and instead specifically returns the data you want to target.

  1. Here, we have the Labeled Value extractor's Value Extractor configured to reference a Data Type returning various purchase order number formats.
  2. Even without adding all the extra Custom Labels, we get what we want. The "Purchase Order Number" Data Field collects the purchase order number on the document, "PO009845", upon testing extraction.

However, there can be a reasonable solution if you cannot use a Value Extractor and must leave it unconfigured to capture a more generic text segment. This will require the use of "Data Element Overrides". For more information, visit the #Data Element Override Utility section of this article.


Maximum Noise

The Maximum Noise property of the Labeled Value extractor controls the maximum number of "noise characters" allowed in the "bounding-region" of a label-value pair.

Now, what does that mean? Let's look at an example, using the "Remit Address" Data Field of our example Data Model.

  1. We've selected the "Remit Address" Data Field.
  2. The Data Field's Value Extractor is set to Labeled Value.
  3. The Labeled Value extractor's Label Extractor is left unconfigured.
    • The extractor will use the collected label for this Data Field for each Document Type.
  4. The Labeled Value extractor's Value Extractor is configured to reference a Data Type returning all addresses for this document set.
    • We've followed best practice here and assigned a Value Extractor. There's nothing wrong with the referenced Data Type (named "VAL - Address"). It returns the street address and city, state, zip code line for all addresses on these invoices.
  5. What we should get upon extracting the document is this:
    91 Vahlen Plaza
    Reston, VA 20191
  6. However, upon testing extraction, no result returns.

What gives? It has to do with these "noise characters" mentioned above.

Noise characters are any letters and digits falling within the bounding region defined by a label-value pair. For our example, the bounding region looks like this.

  1. The label, highlighted in blue, is established by the Labeled Value extractor's Label Extractor result.
  2. The value, highlighted in green, is established by the Labeled Value extractor's Value Extractor result.
  3. The bounding region, highlighted in yellow, is the smallest rectangle which can enclose both the label and the value.

The noise characters are any letters or numbers within this rectangle other than the label or the value.

The highlighted characters in the image would be the noise characters for our example.

The Maximum Noise property allows you to configure how many of these non-label and non-value characters should exist in the bounding box.

You don't typically expect to find a bunch of text between a label and a value. The Maximum Noise property acts as an additional filter to avoid returning results too far away from the label. Where the Maximum Distance property filters out results that are physically beyond a set distance from the label, the Maximum Noise property filters out results that have lots of text between them and the label. With the default of 5, there can be a maximum of 5 letter or number characters between the label and the value.

However, in our case, we have more than 5. We have 15 ("FacturaTechnolo").

  • Note: Our case assumes we only want to capture the street address and the city, state, zip line, not the receiver's name.

FYI

Noise characters are only letters and digits.

Spaces, punctuation marks, and control characters are NOT considered noise characters, even if present in the bounding region.
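
Counting noise characters for this example is straightforward; the sketch below is just that arithmetic, with the bounding-region text taken from the example above. It is illustrative only, not Grooper code.

```python
# Noise characters are letters and digits only; spaces, punctuation marks, and
# control characters are ignored.
def count_noise(text_in_bounding_region):
    return sum(1 for character in text_in_bounding_region if character.isalnum())

print(count_noise("Factura Technolo"))
# -> 15: more than the default Maximum Noise of 5, so no result is returned
#    until the property is raised (to 25 in the next step).
```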

With this in mind, all we need to do to the "Remit Address" Data Field to successfully collect the result at time of extraction is increase the number of allowable noise characters.

  1. Here, we've upped the Maximum Noise property to 25.
  2. Upon extraction, the Labeled Value counts the number of noise characters in the bounding region between the label and the value.
  3. If the number of noise characters is less than the Maximum Noise property's number, the result is returned.
    • 15 is less than 25. Therefore, the result is returned.


Footer Labels

For Data Field objects, you can collect both a "Header Label" and a "Footer Label". As we've seen, the Header Label is the text label for whatever field you're trying to extract. Essentially, the text label marks the beginning of the field's content.

The Footer Label is an optional label used to mark the end of the field's content. The Footer Label is useful when leaving the Labeled Value extractor's Value Extractor unconfigured. While it is still always considered best practice to configure the Labeled Value extractor's Value Extractor, there are certain types of data that are difficult to match with a regular expression. For example, a person's name. In these types of situations, where you must run the Labeled Value extractor without a Value Extractor, a Footer Label can often aid you in throwing out false positive or "junk" data.

The following example is manufactured to demonstrate this concept. Let's say we're using Label Sets to extract the "Settlement Agent".

We would create a Data Field and collect a label for Settlement Agent. We would then set that Data Field's Value Extractor to Labeled Value.

In the case of this document, we would get the result we wanted. The Labeled Value extractor's Label Extractor would match the collected label (stroked in blue). If left unconfigured, its Value Extractor would return the nearest segment to that label's location (according to its layout settings and operation discussed previously in this tutorial). This is exactly what we want, the highlighted name "Jourdain Meardon".

However, what if that value is not present on another document? Such is the case in this image.

In that case, the extractor is still going to look for the nearest segment. Depending on the layout settings, you might return "Seller" or you might return "File #". Both of those are segments. However, they are both the wrong result. The correct value in this case is nothing at all.

With a Footer Label, we can change how the Labeled Value operates when its Value Extractor is left unconfigured.

If we collect Seller for the "Settlement Agent" Data Field's Footer Label (stroked in red), we will restrict Labeled Value to only return text between the Header and Footer Labels (highlighted in yellow). With no text falling between the header and footer, the false positives will not return. In fact, no value will return at all!

Here, we've tested extraction with only the Header Label assigned for the "Settlement Agent" Data Field.

  1. The Data Field's Value Extractor is set to Labeled Value
  2. The Label Extractor is unconfigured.
  3. Left unconfigured, it matches the collected label on the document: "Settlement Agent"
  4. The Value Extractor is unconfigured.
  5. Left unconfigured, it matches the first characters horizontally aligned with the label, up to the Maximum Distance set (in this case the default of 2in).
  6. This returns "Sell".

This is junk data. There is no settlement agent listed on the document. No value should be returned.

We will add a Footer label to prevent this junk data from returning.

  1. Navigate to the Content Model in the Node Tree.
  2. Select the "Labels" tab.
  3. Select the Data Element to which you wish to add a Footer Label.
  4. Select the "Footer" tab.
  5. Collect the label as discussed in the Collect Label Sets section of this article.
    • In this case, the label Seller.

If present on the document, we expect the settlement agent's name to be between the label Settlement Agent (the Header Label) and Seller (the Footer Label).

When we test extraction for the "Settlement Agent" Data Field now, we get very different results.

  1. With a Footer Label added, and the Labeled Value extractor's Value Extractor unconfigured...
  2. ...the extractor will only return text between the Header Label and the Footer Label.
    • In our case, only text between Settlement Agent and Seller.
  3. With no text falling between the Header and Footer Labels, nothing is returned.

This is very specific functionality to the Labeled Value extractor and its interaction with Label Sets. It will only behave this way if you:

  1. Are using the Labeling Behavior and the Data Field's Value Extractor is set to Labeled Value.
  2. Have collected both a Header Label and a Footer Label for the Data Field.
  3. Have not configured the Labeled Value extractor's Label Extractor or Value Extractor.


Using Static Labels for Data Field Extraction

Collecting Static Labels

Data Field elements have a unique label option: the Static label. This label option is useful for situations where the label itself is what you want to extract.

  1. For example, we have a Data Field in this Content Model's Data Model to collect the vendor's name for the invoice.
    • However, there isn't necessarily a label for the vendor's name like there is for other data points on the document.
  2. The purchase order number has a label, "PO Number", pointing to that data, "PO0009845".
  3. However, there is no such label for this invoice document's vendor name, "Factura Technology Corp".
    • But that's the data we want. The name itself. If these invoices are always classified as "Factura" Document Types, they're always going to have this text, "Factura Technology Corp". That's the vendor's name, and that's the data we want.

What we really want to do is collect a piece of information that is the same for every single document of one Document Type. We expect the vendor's name "Factura Technology Corp" to be present for every document assigned the "Factura" Document Type during classification. Furthermore, we always expect it to be "Factura Technology Corp" and not something else.

Therefore, the vendor's name is "static" for the Document Type. It's present on every document of this Document Type and is the same value on every document. You know what else is static on structured and semi-structured forms? Labels! It's just that in this case the label "Factura Technology Corp" is itself the value we want to return.

This is what a Static label is for.

  1. To add a Static label, select a Data Field in the Document Type's Data Model.
    • Here, we've selected a "Factura" document in the Batch and have selected the "Vendor Name" Data Field.
  2. Select the Static tab.
    • FYI: Only the Data Field Data Element has the option for a Static label.
  3. Collect the label you wish to use for the Data Field.
    • Using one of the three label collection methods: 1) Type it into the text editor. 2) Lasso the label on the Document Viewer with the "Select Region" button. 3) Double-click the label segment on the Document Viewer with the "Select" button.


Returning the Static Label

Now that the Static label is collected, how does Grooper know to return it during extraction when the Extract activity runs? The short answer is the Labeled Value extractor type will do this for us.

With "Factura Technology Corp" collected as a Static label, and the "Vendor Name" Data Field configured to utilize the Labeled Value extractor, it will return the Static label itself as the result.

  1. Here, we have the "Vendor Name" Data Field selected in the Node Tree.
  2. The Data Field's Value Extractor property is set to use the Labeled Value extractor type.
  3. The Labeled Value extractor's Label Extractor and Value Extractor are both unconfigured.
  4. With this Labeled Value configuration, and a Static label collected for this Data Field, the Static label is itself what the extractor is looking for on the document.
  5. If present, it will be returned and collected at time of extraction when the Extract activity runs.


Label Sets and the Label Match Extractor Type

About Label Match

The Label Match extractor is extremely similar to the List Match extractor in that it matches one or more items in a defined list. However, it is designed specifically to work with the Labeling Behavior functionality. It will use the fuzzy extraction and vertical and constrained wrapping settings defined on the Content Model if a Labeling Behavior is enabled. This way, you can have a single, unified set of fuzzy match settings for multiple extractors. Rather than configuring these settings, including the confidence score threshold and fuzzy weighting, for multiple extractors, you can configure them just once when enabling the Labeling Behavior and all Label Match extractors will use them.

  • For more information on fuzzy extraction, visit the Fuzzy RegEx article.

For the Label Match extractor to return a result, two conditions must be met.

  1. The document folder must be classified.
    • In other words, it must have a Document Type assigned to it.
  2. That Document Type must have a Labeling Behavior enabled.
    • Either on the Document Type or, more typically, its parent Content Model.

Label Match Example

  1. In this example, a Value Reader is configured to return a small list of field labels on an invoice, using the Label Match Extractor Type
  2. Label Match is selected as the Extractor Type
  3. The list is entered in the Local Entries editor (just like you do with the List Match extractor).
    • Or, you can reference a Lexicon of list items using the "Properties" tab.
  4. The Prefix and Suffix Patterns are entered here.
    • ^|[^\w] is the default Prefix Pattern.
    • $|[^\w] is the default Suffix Pattern.
  5. The document we have selected is classified as an "Invoice" Document Type.
  6. This is a Document Type in the Content Model with the Labeling Behavior enabled.
  7. Upon execution, notice some results are returned with a confidence below 100%.
    • This is due to the fuzzy matching settings configured from the Labeling Behavior. The Label Similarity property was set to 90%. Any items in the list with a fuzzy matching similarity score above 90% are returned. Any falling below 90% (for example the list item CALLER:) are not.
    • Note this means changing the Labeling Behavior settings will impact ALL Label Match extractors for the Content Model's Document Types.

Where are these Labeling Behavior settings again?

  1. The Content Model selected here has a Labeling Behavior enabled.
  2. Labeling Behavior is enabled using the Behaviors property...
  3. ...and added using the collection editor seen here, as discussed earlier in this article.
  4. The Label Match extractor will use all the fuzzy extraction and text wrapping settings defined here.


Use Label Sets for Tabular Extraction

Label Sets and the Tabular Layout Method

Label Sets and Tabular Layout

Many tables label the columns so the reader knows what the data in that column corresponds to. How do you know the unit price for an item on an invoice? Typically, that item is in a table and one of the columns of that table is labeled "Unit Price" or something similar. Once you read the labels for each column (also called "column headers"), you the reader know where the table begins (below the column headers) and can identify the data in each row (by understanding what the column headers refer to).

This is also the basic idea behind the Tabular Layout Extraction Method. It too utilizes column header labels to "read" tables on documents, or at least as step one in modeling the table's structure so that Grooper can extract data from each cell in the table.

Furthermore, label sets collected using a Labeling Behavior can be used with the Tabular Layout method to extract data from tables on documents. In this case, the labels collected for the Data Column children of a Data Table are utilized to help model the table's structure.

Once the column header locations are established, the next requirement is a way to understand how many rows are in the table. This is done by configuring at least one Data Column's Value Extractor property. Generally, there is at least one column in a table that is always present for every row in the table. If you can use an extractor to locate that data below its corresponding column header, that gives you a way of finding each row in the table.

Last, there are a few other considerations you might need to make. Is every row in the table a single line, or are the rows "multiline"? Do you need to clean up the data the Tabular Layout initially extracts for a column by normalizing it with an extractor? Do you need to establish a table "footer" to limit the number of rows extracted?

This tutorial will cover the basic configuration of the Tabular Layout Extraction Method using collected Label Sets and address a few of these considerations.

The basic steps will be as follows:

  1. Collect labels.
    • At minimum you must collect a header label for each Data Column child in the Data Table. We will also discuss the benefits of collecting a label for the full header row.
  2. Assign a Value Extractor for at least one Data Column.
    • We always expect to find a quantity for each line item in the invoice. There's always a "Quantity" column. This data is also present on every row. This will provide the information necessary to find each row in the table.
    • We will also discuss why you might configure the Value Extractor property on additional Data Columns as well.
  3. Set the Data Table object's Extract Method property to Tabular Layout
  4. Test to ensure the table's data is collected.

In a perfect world, you're done at that point. As you can see in this example, we've populated a table. Data is collected for all four Data Columns for each row on the document.

However, the world is rarely perfect. We will discuss some further configuration considerations to help you get the most out of this table extraction method in the "Additional Considerations" section below.


Collect Labels

See the how-to above (Collect Label Sets) for a full explanation of how to collect labels for Document Types in a Content Model. The following tutorial will presume you have general familiarity with collecting labels.

As far as strict requirements for collecting labels for tabular data extraction go, you must at minimum collect a label for each Data Column you wish to extract.

For this "Stuff and Things" Document Type, one column header label has been collected for each of the four Data Column children of the "Line Items" Data Table.

  1. The label Quantity for the "Quantity" Data Column
  2. The label Description for the "Description" Data Column
  3. The label Unit Price for the "Unit Price" Data Column
  4. The label Total for the "Line Total" Data Column

You may optionally collect a label for the entire row of column header labels. This label is collected for the parent Data Table object's label.

  1. The label Quantity Item Serial Number Description Unit Price Total for the "Line Items" Data Table

It is generally considered best practice to capture a header row label for the Data Table. But if it's optional, why do it? What is the benefit of this label?

The answer has to do with imperfect OCR text data and Fuzzy RegEx. Fuzzy RegEx provides a way for regular expression patterns to match in Grooper when the text data doesn't strictly match the pattern. The character string "Gro0per" is just a single character off from the regex pattern Grooper. An OCR engine misreading an "o" character as a zero is not uncommon by any means, but a standard regex pattern of Grooper will not match the string "Gro0per". The pattern expects there to be an "o" where there is a zero.

Using Fuzzy RegEx instead of regular regex, Grooper will evaluate the difference between the regex pattern and the string. If it's similar enough (if it falls within a percentage similarity threshold) Grooper will return it as a match.

  • FYI "similarity" may also be referred to as "confidence" when evaluating (or scoring) fuzzy match results. Grooper is more or less confident the result matches the regex pattern based on the fuzzy regex similarity between the pattern and the imperfect text data. A similarity of 90% and a confidence score of 90% are functionally the same thing (One could argue there is a difference between these two terms when Fuzzy Match Weightings come into play, but that's a whole different topic. And you may encounter Grooper users who use the terms "similarity" and "confidence" interchangeably regardless. Visit the Fuzzy RegEx article if you would like to learn more).

So how does this apply to the Data Table's column header row label? The short answer is it provides a way to increase the accuracy of Data Column column header labels by "boosting" the similarity of the label to imperfect OCR results.

  1. For example, examine the collected label for the "Description" Data Column.
    • Notice the label Description is highlighted red. The label doesn't match the text on the document.
  2. This is due to imperfect OCR results.
    • The label should read "Description" but OCR made some missteps and recognized that segment as "DescripUon".
    • The "ti" in "Description" were recognized as a capital "U". This means "Description" is two characters different from "Description" or roughly 82% similar. The Labeling Behavior's similarity threshold is set to 90% for this Content Model. 81% is less than 90%. So, the result is thrown out.
      • FYI, this threshold is configured when the Labeling Behavior is added using the Behaviors property of a Content Model. The Label Similarity property is set to 90% by default, but can be adjusted at any time by configuring the Labeling Behavior item in the Behaviors list.

As we will see, capturing the full row of column header labels will boost the similarity, allowing the label to match without altering the Labeling Behavior's fuzzy match settings.

First, notice what's happened when we lassoed the row of column header labels.

  1. Some of the labels are off. "oty." should read "Qty." and "DescripUon" should read "Description".
  2. It's because that's what's in the document's text. When you lasso a label, it's going to grab whatever OCR text data was generated from the Recognize activity (or native text for digital documents).
  3. And, our "Description" Data Field's label still isn't matching.
    • But keep your eye on the birdie.

  1. Notice what happens when we spell-correct the lassoed label, typing "Qty." instead of "oty." and "Description" instead of "DescripUon".
  2. Now the label matches. MAGIC!

Not magic. Just math.

The Data Table's column header row label is much, much longer than a single Data Column's column header label. There are simply more characters in "Qty. Qty. Item Number Description Unit Price Extended Price\r\nOrd. Shp." than in "Description" (70 vs 11). Where the "Description" Data Column's label is roughly 82% similar to the text data (9 out of 11 characters), the "Line Item" Data Table's label, composed of the whole row of column labels, is roughly 96% similar to the text data (67 out of 70 characters).

Utilizing a Data Table label allows you to hijack the whole row's similarity score when a single Data Column does not meet the similarity threshold. If the label can be matched as part of the larger whole, its confidence score goes up much further than it would on its own. The Data Table's larger label of the full row of column labels gives extra context to the "Description" Data Column's label, providing more information about what is and is not an appropriate match.
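
The arithmetic is easy to verify with a simple edit-distance ratio, as in the sketch below. Grooper's Fuzzy RegEx scoring is more involved than this, so treat it only as an illustration of why the longer label clears the 90% threshold when "Description" alone does not.

```python
# Back-of-the-envelope similarity check using a plain edit-distance ratio.
# Illustrative only; not Grooper's actual Fuzzy RegEx scoring.
def levenshtein(a, b):
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        current = [i]
        for j, cb in enumerate(b, 1):
            current.append(min(previous[j] + 1,                 # deletion
                               current[j - 1] + 1,              # insertion
                               previous[j - 1] + (ca != cb)))   # substitution
        previous = current
    return previous[-1]

def similarity(label, ocr_text):
    return 1 - levenshtein(label, ocr_text) / len(label)

print(round(similarity("Description", "DescripUon"), 2))   # ~0.82 -> below a 90% threshold
header = "Qty. Qty. Item Number Description Unit Price Extended Price\r\nOrd. Shp."
ocr    = "oty. Qty. Item Number DescripUon Unit Price Extended Price\r\nOrd. Shp."
print(round(similarity(header, ocr), 2))                    # ~0.96 -> above a 90% threshold
```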

So why is it considered best practice to capture a label for the Data Table? OCR errors are unpredictable. The set of examples you worked with when architecting this solution may have been fairly clean with good OCR reads. That may not always be the case. Capturing a Data Table label for the column label row will act as a safety net to avoid unforeseen problems in the future.


Assign a Data Column's Value Extractor

This step is all about row detection.

So far, all we've done is establish the column header positions on each document. So, Grooper knows where the table "starts". But that's not where the data is. The table's data is in the rows.

As it stands, Grooper doesn't know anything about the rows in the tables. It doesn't know the size of each row. It doesn't know what kind of data is supposed to be in the rows. Maybe most importantly, it doesn't know how many rows there are. Tables tend to be dynamic. They may have 3 rows on one document and 300 on the next. Grooper needs a way of detecting this.

Indeed, if we were to test extraction with just labels collected, we would not get any result whatsoever.

  1. FYI you can test data extraction directly from the Labels UI using the "Test" button.
  2. This will create a new "Results" tab, showing you a preview of the results the Extract activity collects from the selected document folder, as defined by its Document Type's Data Model.
  3. As you can see, we get no extraction results for the "Line Item" Data Table.

This is why we need a Data Column's Value Extractor property configured, to give the Extract activity an awareness of the rows beneath the column labels.

The key thing to keep in mind is this data must be present on every row. You'll want to pick a column whose data is always present for every row, where it would be considered invalid if the information wasn't in that cell for a given row.

In our case, we will choose the "Quantity" Data Column. We always expect there to be a quantity listed for the line item on the invoice, even if that quantity is just "1".

  1. We will select the "Quantity" Data Column in the Node Tree.
  2. We will configure the Value Extractor to return the numerical quantity listed for every line item on every row of the table.
    • We will keep this fairly simple for demonstration purposes, using a Pattern Match extractor.

This is the pattern we will use for the "Quantity" Data Column's Value Extractor.

  1. The regex is a fairly simple pattern to match generic quantities.
    • It'll match one to three digits, with an optional decimal point followed by up to four more digits, surrounded by a space character before and after. (One plausible version of this pattern is sketched after this list.)
  2. As you can see, we get two results below the "Quantity" label. We should then get two rows when this table extracts.
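
The exact regex used in the screenshot isn't reproduced here, but one plausible pattern matching that description would look like the sketch below. It is an assumption for illustration, not the extractor from the example.

```python
import re

# One plausible reading of the description above: one to three digits, an
# optional decimal point with up to four more digits, and a space required on
# either side. Illustrative only.
QUANTITY = re.compile(r"(?<= )\d{1,3}(?:\.\d{0,4})?(?= )")

for sample in [" 1 ", " 12.5 ", " 1234 ", " IN165796 "]:
    print(repr(sample), QUANTITY.findall(sample))
# " 1 " and " 12.5 " match; a four-digit number and a text segment do not.
```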

We get a bunch of other hits as well. This is a very generic extractor matching very generic numerical data.

  1. Will this result present a problem? Will we get an extra row for its result?
    • No. That result is above the label collected for the Data Column. The Tabular Layout method presumes rows are below column labels. Any result above them will be ignored.
  2. What about results like these? Will these present a problem?
    • The short answer is no. This result is misaligned with the "Quantity" Data Column's header. It's too far to the right to be considered "under" it and will be ignored as a candidate to produce a row.
    • That said, when you are building your own Data Column extractors, do pay more attention to results below the column header row. They have the most potential to produce false positives, and therefore erroneous rows.

For fairly simple table structures we now have the two things the Tabular Layout method needs to extract data.

So far, we have:

  1. Collected labels for the Data Columns (and optionally a label for the whole row of column labels on the Data Table)
  2. Configured at least one Data Column's Value Extractor.

Now, all we need to do is tell the Data Table object we want to use the Tabular Layout method. We do this by setting its Extract Method property to Tabular Layout.


Set Extract Method to Tabular Layout and Test

A Data Table's extraction method is set using the Extract Method property. To enable the Tabular Layout method, do the following.

  1. Select a Data Table object in your Data Model.
    • Here, we've selected the "Line Items" Data Table.
  2. Select the Extract Method property.
  3. Using the dropdown menu, select Tabular Layout

Now, let's test out what we have and see what we get!

  1. For the selected document folder in the "Batch Viewer" window...
  2. Press the "Test Extraction" button.
    • Side note: We've seen before that we can test extraction using the "Labels" tab of a Content Model or Document Type when Labeling Behavior is enabled. The only real difference is we're testing extraction for the specific Data Element selected in the Node Tree, in this case the "Line Items" Data Table. The "Test" button in the "Labels" tab will test extraction for the entire Data Model and all its component child Data Elements. However, feel free to test extraction at either location. The end result is the same. We're testing to verify extraction results.
  3. The results show up in the "Data Element Preview" window.

For the Tabular Layout method, the Data Table is populated using primarily two pieces of information.

  1. The location and width of the Data Column header labels.
    • This determines the width of the cells for each column.
    • Side note: The width of the column cells is actually determined differently depending on if the table has lines. If the table has lines (as it does in this example) and those lines were previously detected via a Line Detection (or Line Removal) IP Command, the cell width will be expanded to the boundaries of the lines. Table lines give human readers an indicator of where the data "lives" (or is contained). If it's in the box, it belongs to the column. If it's out of the box, it belongs to a different column.
  2. The number of rows as determined by the Data Columns whose Value Extractor property is configured.
    • One row is established for each result the Value Extractor returns.

With these pieces of information, the Tabular Layout method can start to determine the table's structure. If you know where the columns are and how big they are, and you know how many rows there are, you pretty much know what the table looks like.

This allows Grooper to create data instances for each cell in the table.

  1. Once the Tabular Layout method establishes the boundaries of each cell, Grooper "knows" where the table data is located on the page.
  2. The text data (either OCR'd text or native digital text obtained from the Recognize activity) is extracted from each cell instance, populating the Data Table and collecting these results when the Extract activity runs.
    • This is an extremely basic configuration; there are some more advanced configuration techniques to either adjust the size of the cell instances and/or extract data for each cell. Some of these will be discussed in the #Additional Considerations section below.
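
To picture the geometry, the sketch below builds one cell region per column per detected row from the two pieces of information described above. The coordinates, names, and row height are hypothetical; this is not how Grooper computes cells internally.

```python
# Illustrative only: column header x-ranges plus one y-position per detected
# row yield a grid of cell regions to pull text from.
def build_cells(column_bounds, row_tops, row_height):
    """column_bounds: {column name: (left, right)} in inches.
    row_tops: top y-coordinate of each row detected by the Value Extractor."""
    return {name: [(left, top, right, top + row_height) for top in row_tops]
            for name, (left, right) in column_bounds.items()}

columns = {"Quantity": (0.5, 1.5), "Description": (1.5, 4.5),
           "Unit Price": (4.5, 6.0), "Line Total": (6.0, 7.5)}
row_tops = [3.25, 3.5]   # two Value Extractor hits below the "Quantity" header = two rows

print(build_cells(columns, row_tops, row_height=0.25)["Line Total"])
# -> [(6.0, 3.25, 7.5, 3.5), (6.0, 3.5, 7.5, 3.75)]
```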


The basic setup is great for simple table structures. It is much easier to set up than previous table extraction methods, such as Header-Value. However, the biggest challenge in table extraction is the variety of ways in which table data can be formatted and organized into complicated row and column structures.

The Tabular Layout method offers a smorgasbord of configurability for more complicated or nuanced table structures. For more information on Tabular Layout's advanced setup, please visit the full #Tabular Layout article.

Label Sets and the Row Match Method

The Row Match table extraction method uses an extractor to pattern match each row of a table. For each result the extractor returns, the Data Table will collect one row. So, if the extractor returns forty results, you'll end up with a table with forty rows. Data Column results are then populated by filtering the data within each row to the proper column. Commonly, an array-based collation method will be used to return the full row, and elements of that array will form the column results.
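
Conceptually, a Row Extractor behaves like a regex that matches once per row, with sub-results (the array elements) feeding the columns. The sketch below shows the idea with a made-up pattern and made-up sample rows; it is not the extractor used later in this tutorial.

```python
import re

# Hypothetical row pattern: one match per table row, with named groups standing
# in for the array elements mapped to Data Columns. Sample text is invented.
ROW = re.compile(
    r"^(?P<description>[A-Za-z .]+?) +to +(?P<payee>[A-Za-z .]+?) +\$(?P<amount>[\d,]+\.\d{2})$",
    re.MULTILINE)

sample = """Appraisal Fee to ABC Appraisals $405.00
Credit Report Fee to Data Services Inc. $29.75"""

for row in ROW.finditer(sample):
    print(row.groupdict())   # one dict per row; each key maps to a Data Column
```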

Label Sets can also be used in conjunction with the Row Match method. The Data Table's Header and/or Footer Labels can be leveraged to narrow where the Row Extractor executes on the document.

For example, take this Closing Disclosure form. This table structure could be targeted using the Tabular Layout method. However, it is more easily targeted and returned using the Row Match Method. The only potential problem is differentiating between the different sections of tables, such as "A. Origination Charges" and "B. Services Borrower Did Not Shop For" and "C. Services Borrower Did Shop For".

The row structure of each of these highlighted tables is similar enough (or identical in the case of the "B" and "C" tables) that a single extractor could easily produce false positive matches.

However, the labels on this document clearly define where each table begins and ends.

How do you the reader know where the "B. Services Borrower Did Not Shop For" table starts? You read the label "B. Services Borrower Did Not Shop For". This is its header label.

How do you the reader know where the "B. Services Borrower Did Not Shop For" table ends? Once you find the label "C. Services Borrower Did Shop For", you know you're looking at a different table. This is the "B" table's footer label.

The Row Match method will utilize a Data Table's collected Header and Footer labels to define the table's boundaries. The Row Extractor will only return row instances following the Header and/or before the Footer.

Example Row Extractor

  1. For this example, we will use the Content Model named "Row Match and Label Sets - Model"
    • This Content Model already has its Labeling Behavior enabled.
  2. We have selected, in the Batch Viewer, the Batch named "Row Match and Label Sets - Test Batch"
  3. We will use the Row Match Extract Method to populate this Data Table named "B. Services"
  4. Last, we will use the selected Data Type named "TBL-RM - B. Services Table" as our Row Extractor.

  1. This Data Type extractor uses Ordered Array Collation to return each row instance.
  2. The child extractors, returning each element of the array, will be mapped to their corresponding Data Column when the Row Match method executes.
  3. However, the problem is we're getting too many rows. These are the rows we want to collect.
  4. We do not want to collect these rows. They pertain to a different table's information.

  1. Selecting our Data Table named "B. Services" we can test our extractor's results.
  2. We have assigned the Data Type ("TBL-RM - B. Services") as our Row Extractor.
  3. Everything highlighted here is extraneous, false-positive data.
  4. We could narrow down our results by configuring a Header Extractor and/or Footer Extractor in the Row Match sub-properties.
    • However, in this case, we can do things more simply using Label Sets. Rather than building an extractor, all we need to do is capture a label.

Collecting Labels For Row Match

Using Label Sets, you can assign a Header and/or Footer label for a Data Table. The Row Match method will utilize these labels in the place of its Header Extractor and Footer Extractor, respectively.

  1. To collect these labels, we will first go back up to our Content Model
    • The Labeling Behavior is already enabled.
  2. Next, we will navigate to the "Labels" tab.
  3. We will select the Data Table object whose labels we want to collect.
    • Ours is named "B. Services"
  4. First, we will collect the Data Table's Header label.
    • This is where the table "starts". For the Row Match method, no row instances will be collected before this label is encountered on the document.
    • In our case we collected the text B. Services Borrower Did Not Shop For as our Header label.

More importantly, in our case, we need a Footer label. The Footer label will determine where the table "stops" on the document. For the Row Match method, no row instances will be collected after this label is encountered on the document. Any Row Extractor matches after the collected label will be discarded.

  • FYI: In the minor build from which this section was written (version 2021.00.0035), if you want to utilize a Footer label, you must collect a Header label as well.
  1. To collect a Footer label, navigate to the "Footer" tab.
  2. Collect the Data Table's Footer label.
    • In our case we collected the text C. Services Borrower Did Shop For as our Footer label.
  3. Don't forget to "Save" when finished.

That's it! That's all you need to do to establish the table's header and footer. There is no need to collect labels for the Data Columns. Collecting labels for Data Columns is only necessary for the Tabular Layout method.

I will repeat. The Row Match method will only utilize the Data Table's labels. If you collect labels for the Data Columns and you're using the Row Match method, they will do nothing as far as table extraction goes.

Enable Label Sets

The only thing left to do is "tell" the Row Match method you want to use the Header and Footer labels. This is done by enabling the Use Labelset property.

  1. Navigate to the Data Table object in the Node Tree Viewer.
  2. Expand the Row Match sub-properties.
  3. Expand the Options properties.
  4. Change the Use Labelset property from False to True.

Test Results

With the labels collected and the Use Labelset property enabled, our Data Table will properly collect the rows we want from this table.

  1. Upon testing extraction of the selected document folder...
  2. Our Row Extractor collects the desired rows.
  3. Only rows coming after the Header label and before the Footer label are returned.


The Fluid Layout Method

The Fluid Layout table extraction method is designed to switch between the Tabular Layout method and the Row Match method, depending on how a Data Table's labels are configured. So, if you have a varied set of documents where Tabular Layout works well for some Document Types and Row Match works well for other Document Types, you may be able to use Fluid Layout for all of them, avoiding the need for Data Element Overrides.

Label Sets must be collected to use the Fluid Layout method. Each Document Type will use either Tabular Layout or a Row Extractor to collect table data depending on how the labels for a Data Table are collected. Therefore, you cannot utilize the Fluid Layout method without a Labeling Behavior enabled.

The Fluid Layout table extraction method is not only "Label Set aware", it is Label Set dependent.

For example, take these two versions of code descriptions from an EOB form.

Version 1 is clearly a table. It uses the labels "CODE" and "DESCRIPTION" to delineate between each column. The Tabular Layout table extraction method would handily extract this information, returning everything in the "CODE" column to one Data Column and everything in the "DESCRIPTION" column to another.

Version 2 is not exactly a table, but a Data Table could still use the Row Match method to form a table from this list of information. Each bulleted item in the list could be returned as a table row. The code could be filtered into one Data Column and the description could be filtered into another.

You could not use the Tabular Layout method for this "table". There are no column labels present.

  • There is, however, a header label for the whole table "Code", which will be important for the Fluid Layout method.

So, we have a situation where the Tabular Layout or the Row Match method is preferable, depending on the document's layout. Next, we will review how to configure the Fluid Layout table extraction method to target both table structures.

Collect Labels

The first thing you will want to do is collect labels for your Data Table for each document type. How the labels are collected will determine which table extraction method the Fluid Layout method executes.

  • To execute the Tabular Layout method, the Data Table's Data Column Header labels must be collected.
    • Optionally, you may choose to collect a Header and/or Footer label for the Data Table.
  • To execute the Row Match method (also referred to as the Flow Layout), you must collect the Data Table's Header label. You may NOT collect labels for the Data Table's Data Columns.
    • This is how Grooper checks which extraction method is used for each Document Type. If Data Column labels are present, the Tabular Layout configuration is used. If no Data Column labels are present, but the Data Table's Header label is present, the Flow Layout (i.e. Row Match) configuration is used. (This decision is sketched after this list.)
    • Optionally, you may choose to collect a Footer label for the Data Table.
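
Put as pseudocode, the per-Document Type decision described above amounts to the following sketch. The function and parameter names are illustrative, not Grooper properties.

```python
# Illustrative decision logic only; not Grooper code.
def fluid_layout_choice(data_column_labels_collected, data_table_header_collected):
    if data_column_labels_collected:
        return "Tabular Layout configuration"
    if data_table_header_collected:
        return "Flow Layout configuration (Row Match)"
    return None   # no usable labels collected for this Document Type

print(fluid_layout_choice(True, True))    # Data Column labels collected -> Tabular Layout
print(fluid_layout_choice(False, True))   # only the Data Table's Header label -> Flow Layout
```
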
  1. We will use this Content Model named "The Fluid Layout Method - Model" for this exercise.
    • Its Labeling Behavior has already been enabled.
  2. We have navigated to the "Labels" tab to start collecting labels.
  3. We have selected this Batch named "The Fluid Layout - Test Batch".
  4. Notice we have two Document Types
    • "V1 - Tabular Layout" will correspond to the document whose code description is in a proper table with column headers for the "CODE" and "DESCRIPTION" columns.
    • "V2 - Row Match" will correspond to the document whose code description is in a bulleted list.
  5. The two document folders in the Batch have already been assigned the appropriate Document Type.

For Tabular Layout Document Types

The "V1" Document Type will utilize the Fluid Layout method's Tabular Layout configuration. To execute the Tabular Layout configuration, much like executing the Tabular Layout table extraction method in general, Data Column labels must be collected.

  1. We have selected the "V1 - Tabular Layout" document folder in the Batch.
  2. The Header labels for the "Code" and "Description" Data Columns have been collected.
    • CODE for the "Code" Data Column
    • DESCRIPTION for the "Description" Data Column

FYI

Just as is the case with the Tabular Layout table extraction method as a "stand alone" extraction method, when used with Fluid Layout, collecting a Data Table's Header label is optional.

That said, it is still generally considered best practice to collect a row of header labels using the Data Table's Header label, if possible.

  1. Here, we've collected a Header Label for the "Code Remarks" Data Table.

For Flow Layout Document Types

The "V2" Document Type will utilize the Fluid Layout method's Flow Layout configuration. This will utilize the Row Match method to return table data. To execute the Flow Layout configuration ONLY the Data Table's label must be collected.

  1. We have selected the "V2 - Row Match" document folder in the Batch.
  2. The Header label for the "Code Remarks" Data Table has been collected.
  3. DO NOT collect Data Column labels for Document Types for which you wish to utilize the Row Match method to collect data.

Configure Fluid Layout

Now that the labels are collected for our Document Types we can configure the Fluid Layout extraction method for our Data Table.

  1. Select the Data Table in the Node Tree Viewer.
    • We've selected our "Code Remarks" Data Table.
  2. Select the Extract Method property.
  3. Using the dropdown menu, select Fluid Layout.

Expanding the Fluid Layout sub-properties, you can see there are two Layout configurations.

  1. The Tabular Layout configuration will be applied to Document Types whose Data Column labels have been collected.
  2. The Flow Layout configuration will be applied to Document Types whose Data Column labels have NOT been collected, as long as the Data Table's Header label is present.

By expanding the Tabular Layout and Flow Layout properties, you can see their property panels are identical to the Tabular Layout and Row Match table extraction methods respectively.

  1. The properties you see here are the same set of properties you configure for the Tabular Layout method.
  2. The properties you see here are the same set of properties you configure for the Row Match method.

All that's left is to configure extraction logic for each of the Layouts.

Configure Flow Layout

The Flow Layout configuration extracts table data using the Row Match method. What do you need in order for Row Match to collect table data? A Row Extractor.

  1. In our Local Resources folder, we already have a Data Type that will return rows properly for our "V2" Document Type.
  2. Using the Row Extractor property, we've referenced the aforementioned Data Type.

For our purposes, that's all we need to do. For "V2 - Row Match" Document Types, this extractor will properly return each row and collect each column's data. We have no need to configure any of the other Row Match properties.

Configure Tabular Layout

The Tabular Layout configuration extracts table data using the Tabular Layout method. What do you need in order for Tabular Layout to collect table data? At least one Data Column's Value Extractor must be configured in order to detect each row in the table.

  1. We've selected the "Code" Data Column to configure.
  2. For its Value Extractor we've used a Pattern Match extractor.
  3. The Pattern Match extractor's Value Pattern is set to the regex \w+ and its Prefix Pattern is set to the regex \n.
  4. This will return one result for each row of the "CODE" column, effectively detecting all four rows present.

This is a fairly simple table with only two columns. Just configuring one Data Column's Value Extractor will be sufficient for our needs.

  1. If you need to configure any additional Tabular Layout settings, you can do so by selecting the Data Table in the Node Tree Viewer.
  2. Expand out the Tabular Layout properties and configure them as needed.
    • Again, this is a simple table with simple configuration needs. These default property configurations should be adequate to collect table data for the "V1 - Tabular Layout" Document Types.

Test Extraction

Now that extraction is configured for both the Tabular Layout and Flow Layout configurations, Grooper will switch between the Tabular Layout and Row Match table extraction methods, depending on the Document Type.

For the "V1 - Tabular Layout" Document Type, Data Column labels were collected.

Therefore Grooper extracts the table using the Tabular Layout configuration.

For the "V2 - Row Match" Document Type, only the Data Table's Header label was collected, and no Data Column labels were collected.

Therefore, Grooper extracts the table using the Flow Layout configuration (using the Row Match method).


Use Label Sets for Sectional Extraction

There are two Label Set aware Extract Methods for Data Sections.

  1. Transaction Detection
  2. Nested Table

The Transaction Detection method will be most applicable to the majority of use cases that want to use labels to produce section instances. If you simply want to produce a section starting at a header label and ending at a footer label, the Transaction Detection method is what you want. However, this configuration of Transaction Detection is quite different from how it normally produces sections. We will go over how Transaction Detection establishes section instances both with and without Label Sets.

The Nested Table method is a much more niche section extraction method. It produces section instances using repeating tables, nested within each section. This can be a highly effective way to target sections for certain use cases, such as medical EOB (explanation of benefits) forms.

Label Sets and the Transaction Detection Method

About Transaction Detection

The Transaction Detection section extraction method is useful for semi-structured documents containing multiple sections that are themselves very structured, repeating the same (or at least very similar) field or table data.

For example, take this monthly tax reporting form from the Oklahoma Tax Commission.

There are five sections of information on this document listed as "A" "B" "C" "D" and "E". Each of these sections collects the exact same set of data:

  1. A "Production Unit Number" assigned to an oil or natural gas well.
  2. A "Purchaser/Producers Report Number"
  3. The "Gross Volume" of oil or natural gas produced
  4. The "Gross Value" dollar amount of oil or natural gas produced
  5. The "Qualifying Tax Rate" ultimately used to calculate the tax due for the well's production.
  6. And so on.

The Transaction Detection method looks for periodic similarity (also referred to as "periodicity") to sub-divide a document into multiple section instances.

For structured information like this, one way you can define where each section starts and stops is just by the patterns in the fields themselves. These values are periodic. They appear at set intervals, marking the boundaries of each section.

For example,

  1. The "Production Unit Number" is always found at the start of the section.
  2. The "Exempt Volume" is always found somewhere in the middle of the section.
  3. The "Petroleum Excise Tax Due" is always found at the end.

The Transaction Detection method detects the periodic patterns in these values to divide up the document into sections, forming one section instance from each periodic pattern of data detected. Part of how the Transaction Detection detects these patterns is by using extractors configured in the Data Section's child Data Field objects. These are called Binding Fields.

Grooper uses the results matched by these Data Fields to detect each periodic section. For example, you might have a "Production Unit Number" Data Field for these sections that returns five results, one for each section. Once these five results are established, Grooper will look for other patterns around these results to establish the boundaries of each of the five sections.

The Transaction Detection method also analyzes a document's text line-by-line looking for repeated lines that are highly similar to each other.

For example, each of the yellow highlighted lines are extremely similar. They are essentially identical except for the starting character on each line (either "A" "B" "C" "D" or "E"). This repeated pattern is a good indication that we have a set of repeated (or "periodic") sections of information.

Furthermore, the next lines, highlighted in blue, are also similar as long as you normalize the data a bit. If you replace the specific number with just a "#" symbol, they too are nearly identical.

The Transaction Detection method will further go line-by-line, comparing the text on each one to subsequent lines, looking for repeating patterns. Such is the case for the rest of the green highlighted lines. Even accounting for OCR errors, each line is similar enough to detect a pattern. We end up with 5 sets of very similar lines of text, and ultimately 5 section instances returned for the Data Section.

Eventually, Grooper will detect a line that does not fit the pattern. The red highlighted line is totally dissimilar from the set of similar lines detected previously. This is where Grooper "knows" to stop. Not fitting the periodic pattern, this line marks a stopping point. Its text is left out of the last section instance, and with no further lines matching the detected periodic pattern, no further section instances are returned.

The Transaction Detection method is not going to work well for every use case. It succeeds best where most of the data in the section is numerical in nature.

It's easy to normalize numeric data. Any individual number can be swapped for a "#" symbol. A currency value on a line of text in one section could be $988,000.00 and $112,433.00 in another, but as far as comparing the lines for periodic similarity is concerned, they can both be normalized as "$###,###.##". Lexical data tends to be trickier. How do you normalize a name, for example? How do you differentiate a name from a field label? You can do it with a variety of extraction techniques, but not using this line-by-line approach to determining how similar one line is to another.
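
To make the normalization idea concrete, here is a rough Python sketch. It is an illustration of the concept only, not Grooper's actual periodicity algorithm.

  import re
  from difflib import SequenceMatcher

  def normalize(line):
      # Replace every digit with '#' so numeric values compare by shape, not value.
      return re.sub(r"\d", "#", line)

  def similarity(a, b):
      # 0.0-1.0 similarity of two normalized lines.
      return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

  # Two value lines from different sections normalize to the same shape.
  print(normalize("Gross Value $988,000.00"))   # Gross Value $###,###.##
  print(normalize("Gross Value $112,433.00"))   # Gross Value $###,###.##
  print(similarity("Gross Value $988,000.00",
                   "Gross Value $112,433.00"))  # 1.0 -> periodic (repeating) lines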

This is precisely why it's called "Transaction" Detection. It works best with transactional data, which tends to consist of currency, quantity, or other numerical values. Indeed, this method was specifically designed for EOB (Explanation of Benefits) form processing and medical provider payment automation in general.

FYI

What does this have to do with Labeling Behavior and Label Sets?

We're getting there. Ultimately, Transaction Detection is "Label Set aware" and can take advantage of collected Header and Footer labels for a Data Section object. However, collecting labels for the Data Section will quite dramatically change how Transaction Detection works.

It is best to understand how this sectioning method works without Label Sets before we delve into how it works with them.

Configuring Transaction Detection with Binding Fields

Without utilizing Label Sets, the Transaction Detection sectioning method must have at least one Binding Field assigned in order to detect the periodic similarity among lines of text in a document, ultimately forming the Data Section's section instances.

  1. For this example, we will end up configuring the "Production Info" Data Section of this Data Model.
  2. We will utilize the "Production Unit Number" as the Binding Field.
  3. This Data Field utilizes a simple Pattern Match extractor for its Value Extractor assignment.
    • Which returns the production unit numbers on the document using a simple pattern \d{3}-\d{6}-\d-\d{3}
  4. Importantly, notice this returns five result candidates (when testing extraction at the Data Field level in the Node Tree).
    • This will be important because we want to end up creating five section instances. If you expect to return five section instances, your Binding Field's extractor (or Binding Fields extractors if using more than one) will need to return five results.
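
To illustrate why the result count matters, here is the same pattern run in Python against some made-up production unit numbers. Five matches give the method five anchors from which to build five section instances (the sample values are invented, not taken from the actual form).

  import re

  # Hypothetical text in the style of the form's five production unit numbers.
  document_text = ("A 123-456789-1-001 ... B 123-456790-1-002 ... C 123-456791-1-003 ... "
                   "D 123-456792-1-004 ... E 123-456793-1-005 ...")

  matches = re.findall(r"\d{3}-\d{6}-\d-\d{3}", document_text)
  print(len(matches))  # 5 -> one anchor per expected section instance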

Next, we will configure the "Production Info" Data Section to create section instances using the Transaction Detection method.

  1. Select the Data Section in the Node Tree.
  2. Using the Extract Method property, select Transaction Detection.
  3. Select the Binding Fields property.
  4. Using the dropdown menu, select which Data Fields in the Data Section should be used as Binding Fields by checking the box next to the Data Field.
    • Here, we have selected the "Production Unit Number" Data Field.

For this example, all we need to do is assign this single Data Field as a Binding Field. There is enough similarity between the repeating sections that this is sufficient. (For more complicated situations, you may need multiple Binding Fields. Just be sure all Binding Fields are present in each section. No "optional" Data Fields should be used as Binding Fields.)

The Transaction Detection method will then go through the line-by-line comparison process around the Binding Fields to detect periodic similarities to produce section instances.

  1. How Grooper goes about detecting these periodic patterns is controlled by the Periodicity Detection set of properties.
  2. In our case, five section instances were established, one for each result from the "Production Unit Number" Data Field's Value Extractor.

FYI
  1. If you need to troubleshoot the Transaction Detection method's results, the "Diagnostics" tab can give you additional information as to how Grooper detected these repeating patterns in the document's text data.
  2. You will find the following reports for the Data Section:
    • Execution Log
    • Preprocessed Document
    • Labels
    • Periodicity Matrix

Configuring Transaction Detection with Label Sets

Now that we understand the basics of the Transaction Detection method, we can look at how this sectioning method interacts with the Labeling Behavior. Its behavior is wildly different if a Header label is collected for the Data Section. In fact, assuming you can collect a Header label for the Data Section, the method is so different that a Binding Field is not even necessary to produce the section instances.

Establishing the section instances is almost as simple as...

  1. Start the section instance at the Header label.
  2. Stop the section instance at the next Header label (or Footer label).
  3. Repeat for every Header label found on the document.
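
In effect, this amounts to splitting the document text at every Header label hit. A minimal Python sketch of the idea (the sample text and label positions are invented for illustration):

  def split_into_sections(text, header_positions):
      # Each section instance runs from one Header label hit to just before the next.
      sections = []
      for i, start in enumerate(header_positions):
          end = header_positions[i + 1] if i + 1 < len(header_positions) else len(text)
          sections.append(text[start:end])
      return sections

  doc = "HDR section A data ... HDR section B data ... HDR section C data ..."
  hits = [i for i in range(len(doc)) if doc.startswith("HDR", i)]
  print(len(split_into_sections(doc, hits)))  # 3 header hits -> 3 section instances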

For example, we have collected a Header label for the "Production Info" Data Section here.

  1. To add the label, we've selected the Content Model in the Node Tree.
  2. We've navigated to the "Labels" tab.
  3. We've selected the document in the Batch classified as the "OTC Form 300" Document Type.
    • In other words, this is the Label Set for the "OTC Form 300" Document Type.
  4. We've selected the Data Section in the Data Model.
  5. For the Header label, we've captured the first line of field labels.
    • 8. Production Unit Number 9. Purchasers/Producers Report Number 10. Gross Volume 11. Gross Value
  6. Notice we have five hits for this label, one at the start of each section.

Next, we will configure the "Production Info" Data Section to create section instances using the Transaction Detection method.

  1. Select the Data Section in the Node Tree.
  2. Using the Extract Method property, select Transaction Detection.
  3. Notice no Binding Fields are selected.
  4. But we still get the five section instances returned!
    • In fact, for this example, no further configuration was required other than collecting the Data Section's Header label and setting the Extract Method to Transaction Detection.
  5. The section instance starts on the line containing the Header label.
  6. And it ends on the line before the next Header label.
    • Then the second section instance starts at the second header and so on.


Label Sets and the Nested Table Method


The Nested Table Data Section Extract Method was specifically designed for a particular combination of sectional and tabular data found on medical EOB (Explanation of Benefits) forms (however, it may have application in other areas). These documents are often broken up into sections of claims, with information like a claim number and a patient's name, followed by a table of information pertaining to services rendered, and ending with some kind of footer, usually a total line adding up amounts in the table.

One way you can often identify where these claim sections start and stop is by the tables themselves. Essentially, you'll have one table per claim. Or, in Grooper's case, one Data Table instance per Data Section instance.

The Nested Table sectioning method takes advantage of these kinds of structures, utilizing a Document Type's Label Set to do so.

The Nested Table method has two hard requirements:

  1. The Data Section must have a child Data Table object.
    • Furthermore, the Nested Table Data Section Extract Method was designed to work best when its child Data Table uses the Tabular Layout Extract Method.
    • While it is possible for this Data Table to use any tabular extraction method, the Tabular Layout method is preferred.
  2. The section must have a "footer" able to be captured as the Data Section's Footer label.

Set Data Section's Extract Method to Nested Table

The Nested Table method is a little different in that it is a sectional extraction method but also involves tabular data. Ultimately, both a Data Section object and a Data Table object are required for it to work. However, it is primarily a method of breaking up a document into multiple sections for data extraction purposes. As such, it is a Data Section extraction method.

  1. Select a Data Section in the Node Tree.
  2. Set the Extract Method property to Nested Table.
  3. Using the Table property select the child Data Table to be used to establish the repeated sections in the document.
  4. In our case, we've selected the highlighted child Data Table named "Service Line Info".
    • This Data Table's extraction results, combined with the Data Section's Footer label we will collect, will form the multiple section instances for the Data Section.
    • The Data Table must be a direct child of the Data Section.

Configure the Data Table

The Data Table should be configured to collect all table rows for the full document. When configuring the Data Table and testing its results, just ensure the table accurately extracts the full document as a single table. The Data Section (using the Nested Table method) will take care of breaking out the rows into the appropriate sections.
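
Conceptually, the Nested Table method then walks that single full-document table and cuts it into sections wherever a Footer label hit occurs. The following is a simplified Python sketch of that grouping; the row coordinates and footer positions are invented, and this is not how Grooper internally resolves geometry.

  def group_rows_by_footer(rows, footer_positions):
      # Assign each extracted row to the section ending at the first footer below it.
      # 'rows' are dicts with a vertical 'top' coordinate; 'footer_positions' are the
      # vertical positions of each "Totals" footer hit, in reading order.
      sections = [[] for _ in footer_positions]
      for row in rows:
          for i, footer_top in enumerate(footer_positions):
              if row["top"] < footer_top:   # the row sits above this footer
                  sections[i].append(row)
                  break
      return sections

  rows = [{"top": t} for t in (100, 120, 140, 300, 320, 500, 520, 540)]
  footers = [200, 400, 600]                 # three "Totals" lines -> three sections
  print([len(s) for s in group_rows_by_footer(rows, footers)])  # [3, 2, 3]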

It is considered best practice for the child Data Table to use the Tabular Layout method.

The Nested Table method was designed specifically with a Data Table using the Tabular Layout table extraction method in mind. While it is technically possible to use other table extraction methods, you will achieve the best results when the Data Table uses the Tabular Layout method.

  1. Select the Data Section's child Data Table.
    • If you have more than one child Data Table, make sure you select the Data Table referenced by the Nested Table method's Table property.
  2. Assign the Data Table's Extract Method and configure its extraction.
    • In our case we have enabled the Tabular Layout method, having already collected labels for the Data Table and its child Data Columns for the "Astro Insurance" Document Type.
  3. Press the "Test Extraction" button to ensure the Data Table collects all rows on the document.
  4. Note, we get one big table since we're testing on the Data Table object of our Data Model's hierarchy.
  5. Even though we technically have three distinct tables on the document in three sections, one for each patient.
    • This is good for now. This is what we want to verify, that we get the correct data populated for every table row. Sectioning out the document and placing each table in its own section will be performed by the parent Data Section using the Nested Table method.

FYI The general guidance for testing the child Data Table's extraction results is to verify every row on the document is returned as a single table.

However, there are some situations where you may not return every row but still get Nested Table to section the document appropriately and ultimately return each table to each established section.

You may run into this if your table uses a Footer label or extractor. To illustrate this, we added a Footer label for this Data Table, using the text label "TOTALS" at the bottom of each table.

  1. Notice we only return the first three rows for the first table on this document, whereas before we were returning every table row for every table.
  2. That is because the Data Table stopped looking for rows once it encountered the Footer label TOTALS
  3. However, because that footer is present at the end of each table (which will ultimately inform the Nested Table method how to establish sections), each subsequent table will still extract successfully and populate its section when the Data Section executes, as seen in the subsequent tabs.

To sum up: in general, make sure your Data Table extracts every row for the whole document when testing your Data Table's configuration. However, if you have repeating footers, you may only be able to verify that the first section's table rows populated correctly; you will be able to verify the rest of the document's tables extract correctly whenever you execute the Data Section's extraction.

Add the Data Section's Footer Label

In order for the Nested Table method to properly section out the document, you must assign a Footer label in the Document Type's Label Set for the Data Section. This will give Grooper an awareness of where the section should stop (ultimately allowing another section to start). In our case, we can use the text label "Totals". At the end of every table/section there is this "Totals" line totaling up various columns in the table. Since this label is present at the end of every section, we can collect it as the Data Section's Footer label, which the Nested Table method will then use to establish where each section instance ends.

  1. To collect the Footer label, first navigate to the Content Model in the Node Tree.
  2. Select the "Labels" tab.
  3. Select the Data Section object in the Labels collection UI.
  4. Select the Footer tab.
  5. Collect the label.
    • See the Collect Label Sets section of this article if you need more information on how to collect labels.
The Footer label must be collected for the Data Section for the Nested Table method.

Even if the Data Table object collects and uses a Footer label for its own needs, the Data Section must have a Footer label defined as well (even if it's the exact same label).

Run Global Header Detection

If we are to test extraction at this point, we will see mixed results. The Data Section will correctly produce the three section instances for this document. However, the tabular data will not be collected.

  1. Selecting the Data Section in the Node Tree
  2. and pressing the "Test Extraction" button, we can verify these results.
  3. The Nested Table method populates three section instances, as desired.
  4. The Data Field extraction executes flawlessly.
    • Not only do we have the right number of section instances, but their dimensions are correct as well. They fully encapsulate the appropriate text data.
  5. However, the Data Table does not extract anything at all.

As a side note, the "Inspector" tab can be very helpful when troubleshooting extraction in general, but particularly Data Section and Data Table extraction.

  1. Select the "Inspector" tab.
  2. Expand the "Claim Section" Data Section result to inspect the individual section instance results.
  3. Select one of the section instances.
  4. You can visually see the dimensions of the instance in the "Image View" tab.
    • It is likely hard to see in this image. However, the section will be outlined in a red line.
  5. You can also use the "Text View" tab to see all the text data for the selected section instance.

If you were to select each section instance, you could verify at this point that all three sections were established successfully and the text data for each table is present. It just wasn't extracted. Why not?

This has to do with where the section instances are and where the Data Table and Data Column labels are.

  1. This is where the section instances were determined to be on the document.
    • And this is absolutely correct. This is what we wanted to happen.
  2. However, the Data Table and Data Column labels for this Document Type's Label Set fall outside of each of these sections.

This presents a challenge. The Tabular Layout table extraction method relies on these labels in order to extract the tabular data. As is, Grooper can't "see" outside of the section instances. If only Grooper could look up to the table and column labels, the table would extract with ease.

Luckily, there is a way for the Tabular Layout method to do just that, using the Run Global property.

  1. Navigate to the child Data Table object in the Node Tree.
  2. Expand the Tabular Layout properties.
  3. Expand the Header Detection properties.
  4. Change the Run Global property from False to True.

This will allow Grooper to detect Data Table and Data Column labels outside of the section instances. Perfect for what we're trying to do here.
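
In effect, Run Global changes which text the header detection step is allowed to search. Here is a hedged Python sketch of the difference; it is a simplification for illustration, not Grooper's implementation.

  def find_column_headers(section_text, full_document_text, column_labels, run_global):
      # Locate each Data Column label either within the section or on the full page.
      search_space = full_document_text if run_global else section_text
      return {label: search_space.find(label) for label in column_labels}

  full_doc = "Date  Procedure  Charge\nclaim section text without any column labels..."
  section = "claim section text without any column labels..."
  labels = ["Date", "Procedure", "Charge"]

  print(find_column_headers(section, full_doc, labels, run_global=False))
  # {'Date': -1, 'Procedure': -1, 'Charge': -1} -> headers not found, table extraction fails
  print(find_column_headers(section, full_doc, labels, run_global=True))
  # {'Date': 0, 'Procedure': 6, 'Charge': 17}   -> headers found above the section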

What's going to happen when we test extraction now? Find out in the next tab!

Test For Success

With the child Data Table now using global header detection (by setting the Run Global property to True), it can look outside the section instances for the column header labels on the full document. Let's see how our sections extract now and if we get any table data populated.

  1. We're navigating back to the parent Data Section to test extraction.
  2. Upon testing extraction, the Nested Table method establishes the three section instances, as we've seen before.
  3. And now, the table data is extracted.

Success! The Run Global property is extremely beneficial when trying to extract table data from multiple sections. Without it, Tabular Layout would not have any way of referring to the column header labels collected in the Label Set. With this property enabled, Tabular Layout can do something very atypical for sectional data extraction. It can look beyond a section instance's text data and refer to the full document (in this case, to locate the Data Table and Data Column labels in the Label Sets).

FYI

You can utilize the Run Global property to this effect when using other Data Section Extract Methods as well, not just Nested Table.

However, do note this property is only available/applicable to Data Tables using the Tabular Layout Extract Method.

Bonus Info: Hierarchical Tables and Peer Parent Labels


Additional Information

Label Layout Options

As we've been collecting labels, you may have noticed the Layout property change from Simple to Tabbed or Wrapped. The Layout property determines how the label's text is presented on the document. The Layout can be one of the six following options:

  • Simple
  • Tabbed
  • Substring
  • Boxed
  • Wrapped
  • Ruled

When collecting labels in the Labels tab, Grooper will automatically detect the appropriate label layout. However, there may be some circumstances where you need to manually select the label's layout. Next, we will describe each of these Layout options.

Simple

The Simple Layout is by far the most common. Most fields on a document will utilize this layout. These labels consist of words that do not cross "segment boundaries", meaning the words themselves are not separated by large amounts of whitespace like tabs or by terminal characters like a colon (as a colon character often marks the end of a label).

  1. We will use this Content Model named "Label Layouts - Model" which already has a Labeling Behavior enabled.
  2. We will use these Data Fields to capture various labels and describe their corresponding Layout specifications.
  3. We will use this Batch named "Label Layouts - Test Batch".
  4. For the "Simple" Data Field we captured this portion of text.
  5. Grooper automatically detected its Layout as Simple.
  6. Notice the text here was not returned as a label.
    • Even though the words are the same (Im a Simple Label), its layout is different. There is a large gap between Simple and Label. The Simple Layout does not permit this. Hence, only the label to the left is returned. Only labels whose words do not cross a segment boundary, like a tab, are returned.
    • For simple fields, this makes sense. You don't expect a single label to cross these segment boundaries. Large gaps in whitespace or terminal characters, like colons, are used to distinguish one label from the next. The Simple Layout utilizes this aspect of document structure to toss out labels whose words do cross these boundaries, preventing false positive label matches.

Tabbed

The Tabbed Layout is used for situations where you do want to cross segment boundaries. Think about capturing a table's row of header labels. Often each column's label will be separated by large amounts of whitespace. The Simple Layout would not permit you to capture the table's header but Tabbed will.

  1. For the "Tabbed" Data Field, we captured this portion of text.
    • Notice the large gap between Tabbed and Label. However, the label still matches.
  2. This is because Grooper automatically detected this as a Tabbed Layout.
  3. Notice this text also returned, even though it does not have large whitespace gaps between any of its words.
    • The Tabbed Layout does not expect words to cross segment boundaries; it merely permits them. You can think of this Layout as a more permissive version of Simple. Tabbed doesn't care whether there are segment boundaries between a label's words, whereas Simple does.
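
To make the contrast concrete, here is a rough Python sketch of the Simple versus Tabbed rules. It works on raw strings only, and the boundary heuristics (tabs, wide gaps, interior colons) are assumptions for the sake of the example; Grooper actually evaluates positional layout data.

  import re

  def crosses_segment_boundary(candidate):
      # A tab, a large whitespace gap, or an interior colon splits text into segments.
      return bool(re.search(r"\t| {3,}", candidate)) or ":" in candidate.rstrip(":")

  def matches_label(candidate, layout):
      if layout == "Simple":
          return not crosses_segment_boundary(candidate)  # reject boundary-crossing text
      if layout == "Tabbed":
          return True                                     # boundaries permitted, not required
      raise ValueError("unhandled layout: " + layout)

  print(matches_label("Im a Simple Label", "Simple"))        # True
  print(matches_label("Im a Simple      Label", "Simple"))   # False: large gap = boundary
  print(matches_label("Im a Simple      Label", "Tabbed"))   # True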

Substring

The Substring Layout is intended for circumstances where a label is bookended between other portions of text. In other words, it is a "substring" of a larger string of text.

  1. For the "Substring" Data Field, we captured this portion of text.
  2. Notice Grooper automatically detected the layout as Simple.
  3. Furthermore, this label does not match, even though its text Substring Label: is identical.
    • This is a good example of a substring. There are portions of text before and after it (in both cases, the word Value). This prevents the label from being returned when using the Simple Layout.

This is a situation where we would want to manually assign the label's Layout, if we want to collect substrings as labels.

  1. To do this, we will select the Layout property.
  2. Using the dropdown menu, we will select Substring.
  3. With the Data Field's label using the Substring Layout, the substring now matches.
  4. Furthermore, notice both the substring and the simple string match, using the Substring Layout.
    • Again, you may think of this as a more permissive version of the Simple Layout.

Wrapped

The Wrapped Layout will return labels that wrap full lines of text on a document. So, if a label starts on one line, then continues on one or more lines after, this layout will successfully return it. The Wrapped Layout was also useful when we were collecting table labels for the entire header row. For tables that had column headers on multiple lines, this layout was most appropriate to return the whole row of column headers.

  1. However, the Simple Layout will not work to capture this portion of text as a label.
  2. Normally, Grooper would capture this text as a Wrapped label, but we manually assigned it the Simple Layout.
  1. For the "Wrapped" Data Field, we captured this portion of text.
  2. Grooper automatically detected this as a Wrapped label.
    • This is because the label wraps full lines: "This line" occupies one full line on the document, and "wraps" occupies the next.

  1. As a side note, if we manually change this Layout to Simple, the label will still return.
  2. The text matches in this case due to the Vertical Wrap property being enabled on the Labeling Behavior, which allows the Simple Layout to capture stacked labels.
    • See the Vertical Wrap article for more information on this property.

  1. The label will match when the Layout is set to Wrapped.
  2. The reason why is that the first line ("So does") is not stacked on top of the second ("this one"). However, the first line does wrap to the next.

  1. Furthermore, the Wrapped Layout does permit segment boundaries between text segments, such as the tab space between about and this in this portion of text.

Ruled

Lines are used on documents to divide fields, sections, table columns, or otherwise distinguish between one piece of information and another. Because of this, it is atypical to find a stacked label with a line between its first and second lines. The Simple Layout respects this by preventing labels from returning if a horizontal line falls between any portion of the stacked label.

However, there may be rare circumstances where a horizontal line does fall between portions of the stacked label. In that case, you will want to use the Ruled Layout.

Line location information must be present in the Layout Data in order for Grooper to determine if a line is present. A Line Detection or Line Removal command must have been previously executed by an IP Profile during Image Processing or Recognize to obtain this information.
  1. For the "Ruled" Data Field, we captured this portion of text.
  2. Grooper automatically detected this as a Ruled label.
    • This is because there is a horizontal line between the first line of the stacked label This line and the second line rules.
  3. The text matches in both cases whether or not there is a horizontal line between any portion of the stacked label.

  1. If, however, we change the Layout to Simple, only the text without the line between the stacked label will return.
  2. Since there is a line present between the first and second line, the label no longer returns.

If you want to use the Ruled Layout option, you must enable the Vertical Wrap property of the Content Model's Labeling Behavior.

1. The Vertical Wrap property is the last configurable property in the Labeling Behavior property panel.

If you attempt to collect a Ruled label without Vertical Wrap enabled or attempt to change a Layout to Ruled without Vertical Wrap enabled, you will get an error message.

•  Again, keeping Vertical Wrap enabled is highly preferable and will avoid errors like this. Only disable this property if absolutely necessary.
•  FYI: In older minor builds of version 2021, this will be an unhandled exception error message, reading "Object reference not set to an instance of an object", as seen in this screenshot.

Boxed

The Boxed Layout is intended to capture labels that wrap inside a box, enclosed in lines on all four sides. You can use this Layout to distinguish between labels that fall inside a box and those that do not when the Vertical Wrap property is disabled.

Line location information must be present in the Layout Data in order for Grooper to determine if a line is present. A Line Detection or Line Removal command must have been previously executed by an IP Profile during Image Processing or Recognize to obtain this information.
  1. For the "Boxed" Data Field, we captured this portion of text.
  2. With the default Labeling Behavior properties, Grooper automatically detected this as a Simple label.
  3. Both the label inside a box, and the label outside a box match.

You can differentiate between a label in a box and one outside a box by disabling the Vertical Wrap property.

  1. Here, we've gone into the Labeling Behavior property grid and set the Vertical Wrap property to Disabled
    • CAUTION! If you do this, vertical wrapping will be disabled for all labels and all label Layouts, not just for the Boxed Layout.

  1. Switching back to the Labels tab, our label now does not match at all.
  2. With Vertical Wrap disabled, the Simple Layout will not be able to match the stacked labels.

  1. However, if we change the Layout to Boxed, we will get a match.
  2. The label wrapped inside a box will match.
  3. The label that is not wrapped inside a box will not match.

If you want to use the Boxed Layout option in the manner described above, you must enable the Constrained Wrap property of the Content Model's Labeling Behavior.

1. The Constrained Wrap property is the next-to-last configurable property in the Labeling Behavior property panel.

If the Constrained Wrap property is disabled and Vertical Wrap is also disabled, you will not be able to return labels inside a box with the Boxed Layout.


Data Element Override Utility

Earlier in this article, we talked about using the Labeled Value Extractor Type without configuring its Value Extractor. Again, it is considered best practice to configure its Value Extractor. However, sometimes data is difficult to pattern match. For example, an extractor to return people's or company names can be difficult to craft. It is precisely for these cases that the option to leave a Labeled Value extractor's Value Extractor unconfigured exists when using Label Sets.

To make the best use of this functionality, Data Element Overrides are typically necessary. Indeed, because the Label Set approach is more templated in nature, Data Element Overrides can be a useful tool to fine tune extraction for one specific Document Type. In this section, we will use the "Purchase Order Number" Data Field of our "Labeling Behavior - Invoices - Model" Content Model to demonstrate this.

Revisiting the Problem

The problem arose due to how the Labeled Value extractor behaves when its Value Extractor is left unconfigured. For some of our invoices, this didn't really present a problem at all.

  1. Here, we have the "Purchase Order Number" Data Field selected in our "Labeling Behavior - Invoices - Model" Content Model.
  2. The Data Field's Value Extractor is set to Labeled Value, as is appropriate to utilize the label collected for each Document Type in their Label Sets.
  3. We have re-set the Labeled Value extractor's Value Extractor. It is unconfigured.

For certain document layouts, this approach works just fine.

  1. Here, we have selected a "Rechnung" Document Type folder in the "Labeling Behavior - Invoices - Test Batch" Batch.
  2. Upon testing extraction...
  3. The correct value appropriately extracts.
    • This is due to the special functionality of the Labeled Value extractor when using Label Sets and leaving its Value Extractor unconfigured. The extractor will return text segments to the right of the Data Field's collected label.

However, this will not be the case for all document layouts, notably those whose labels are stacked vertically on top of their corresponding value.

  1. Here, we have selected a "Factura" Document Type folder.
  2. Upon testing extraction...
  3. We do not get the right value.
    • Without a Value Extractor configured, the Labeled Value extractor will return text segments to the right of the label, which, in this case, is not the right text data.
  4. Notice the Maximum Distance settings for this extractor. The extractor will return text results a maximum of 2 inches to the right and 2 inches below.
  5. So, why isn't it returning the actual purchase order number? It certainly falls within 2 inches below the label.
    • Again, this is due to the specialized way Labeled Value works without a Value Extractor configured. It will always return text data to the right of the label if any is present within the Right Maximum Distance setting.
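
A simplified way to picture this behavior, including the effect of the Right distance we are about to clear, is sketched below in Python. The coordinates and field values are invented, and this is only a conceptual stand-in for the unconfigured Labeled Value extractor, not Grooper's actual geometry handling.

  def pick_value(label_box, candidates, max_right, max_below):
      # Mimic an unconfigured Labeled Value: prefer text to the right of the label,
      # falling back to text below it only when nothing qualifies on the right.
      right = [c for c in candidates
               if max_right and 0 < c["left"] - label_box["right"] <= max_right
               and abs(c["top"] - label_box["top"]) < 0.1]   # roughly the same line
      below = [c for c in candidates
               if max_below and 0 < c["top"] - label_box["bottom"] <= max_below]
      chosen = right or below
      return chosen[0]["text"] if chosen else None

  label = {"top": 1.0, "bottom": 1.2, "right": 2.0}               # the field's label
  texts = [{"text": "Bill To address", "left": 3.0, "top": 1.0},  # to the right (wrong value)
           {"text": "PO-12345",        "left": 0.5, "top": 1.3}]  # directly below (right value)

  print(pick_value(label, texts, max_right=2.0, max_below=2.0))   # Bill To address
  print(pick_value(label, texts, max_right=None, max_below=2.0))  # PO-12345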

However, we can easily get this extractor to return the actual purchase order number. All we have to do is tell it not to look to the right of the label.

  1. We can expand the Maximum Distance property.
  2. Clear the Right property value (or you could set it to 0 in as well).
    • This will ensure the Labeled Value extractor will only return text below the label, ignoring the text to the right of it.
  3. Test extraction.
  4. And we get the value we want.

But, what about our documents that did have the purchase order number laid out to the right of the label?

  1. If we go back to our "Rechnung" Document Type folder...
  2. ..and test extraction...
  3. Now, we get a result we don't want.

Data Element configurations are globally applied to all Document Types which inherit them. In our case, all our Document Types inherit the Content Model's Data Model (and its child Data Elements, such as our "Purchase Order Number" Data Field). Therefore, the changes we make to the "Purchase Order Number" Data Field's extractor will affect all documents of all Document Types. It's simply going to execute as we configure it, regardless of which specific Document Type is extracted.

We're really in a situation where we want one Document Type to use one configuration and another Document Type to use a slightly different configuration. This is exactly what "Data Element Overrides" are for.
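
Conceptually, an override is a per-Document-Type patch applied on top of the shared Data Element configuration at extraction time. A loose Python sketch of that idea follows; the property names are illustrative, not Grooper's internal settings.

  BASE_CONFIG = {"Purchase Order Number": {"max_right_in": 2.0, "max_below_in": 2.0}}

  # Per-Document-Type overrides; only "Factura" changes the shared configuration.
  OVERRIDES = {"Factura": {"Purchase Order Number": {"max_right_in": None}}}

  def effective_config(field, document_type):
      # Merge the Document Type's override (if any) over the shared field settings.
      merged = dict(BASE_CONFIG[field])
      merged.update(OVERRIDES.get(document_type, {}).get(field, {}))
      return merged

  print(effective_config("Purchase Order Number", "Rechnung"))
  # {'max_right_in': 2.0, 'max_below_in': 2.0} -> unchanged shared settings
  print(effective_config("Purchase Order Number", "Factura"))
  # {'max_right_in': None, 'max_below_in': 2.0} -> Right distance cleared for Factura only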

Data Element Override Basics

Before we get into setting up "Data Element Overrides", we will rewind a bit and set our Labeled Value extractor's Maximum Distance properties back to the default settings.

  1. We've reset the Maximum Distance properties to their default values.
  2. So we're back to square one. We are no longer getting the correct value for this "Factura" Document Type folder.

What we want to do here is change how these "Purchase Order Number" Data Field properties are configured for the "Factura" Document Type, and ONLY for the "Factura" Document Type. "Data Element Overrides" allow us to do this by overriding a Data Element's property settings for a specific Document Type (in our case, the "Purchase Order Number" Data Field for the "Factura" Document Type).

"Data Element Overrides" are configured using the Document Type object to which they will be applied. We will thus configure an override for the "Factura" Document Type.

  1. Select the "Factura" Document Type in the Node Tree Viewer.
  2. Navigate to the "Overrides" tab.
  3. Here you will see all Data Elements the selected Document Type inherits.
    • Important! Overrides are configured in this UI for the Document Type selected in the Node Tree Viewer NOT the Batch Viewer.

  1. Select the Data Element whose properties you wish to override.
    • In our case, we want to change the property configuration of the "Purchase Order Number" Data Field.
  2. Navigate to the "Property Overrides" tab.
  3. What you see here is a duplication of the selected Data Element's property grid.
    • This is how the "Purchase Order Number" Data Field is currently configured. If we navigated back to that object in the Node Tree Viewer, we would see the exact same property configuration as we see here. For example, we can see the Value Extractor property is set to Labeled Value, just as it is on the object itself.

Using the "Property Overrides" UI, any property configuration we edit will only apply to the selected Document Type (in our case, the "Factura" Document Type).

  1. Expand the Labeled Value extractor's sub-properties
  2. Expand the Maximum Distance property's sub-properties.
  3. Clear the Right property's value (or set it to 0 in).

Now the "Purchase Order Number" Data Field will extract using these settings, only for the "Factura" Document Type.

  1. FYI: Any overridden Data Element will be underlined. Its text will also be blue once we navigate off this Data Element.

  1. If you have a "Factura" Document Type folder selected in the Test Batch...
  2. ...and test extraction for the overridden Data Element...
  3. ...you will now get extraction results using the overridden property configuration.

  1. Going back to the "Purchase Order Number" Data Field, we can verify the override only affects the "Factura" Document Type.
  2. Selecting our "Rechnung" Document Type folder...
  3. ...and testing extraction...
  4. We get the result we want.
  5. The "Rechnung" Document Type has no overrides configured, and thus uses the Maximum Distance configuration as-is for the Data Field.

Data Element Overrides can be an effective way of fine-tuning extraction logic specific to an individual Document Type. Because the Label Set approach is more templated in nature, each Document Type corresponds to one specific format, meaning the document's layout will be consistent for each folder classified as that Document Type. Many users will take advantage of this and leverage Data Element Overrides for various fields on various Document Types, especially when utilizing Label Sets.

There is a shortcut to configuring Data Element Overrides using the "Labels" collection UI, which we will demonstrate in the next tab.

Overrides & the Labels UI

In the previous tab, we taught you the normal way to configure Data Element Overrides for a Document Type. You can configure overrides in this manner whether or not you're using a Labeling Behavior in your Content Model. If you are using a Labeling Behavior, there is a shortcut to edit overrides for a Data Element. You can do it directly from the "Labels" tab, using the same UI you use to collect labels.

  1. Here, we have selected the "Labeling Behavior - Invoices - Model" Content Model.
  2. We have also navigated to the "Labels" tab.
  3. We need an override for the "Envoy" Document Type.
  4. Just like the "Factura" Document Type, these invoices present the purchase order number below the label and not to the right.

So, we need an override for the "Purchase Order Number" Data Field for the "Envoy" Document Type, which we can do without leaving the Labels UI.

  1. Select a Batch Folder assigned the Document Type whose overrides you want to edit.
    • In our case, we will keep selected this "Envoy" Document Type folder.
    • Important! Overrides configured in the Labels UI are configured for the Document Type selected in the Batch Viewer. Since you're not manipulating a selected Document Type object in the Node Tree, this is how Grooper "knows" which Document Type's overrides you are editing.
  2. In the "Labels" editor, double click the name of the Data Element whose property configurations you wish to override.
    • In our case, we want to edit the "Purchase Order Number" Data Field's override. So, we double click "Purchase Order Number"
  3. This will bring up a window to edit the double-clicked Data Element's overrides.
  4. Just like before, this is a duplication of the Data Element's property grid. Any adjustments you make to the Data Element will execute only for the Document Type selected.
    • For example, we can clear out the Labeled Value extractor's Right Maximum Distance property, forcing the Labeled Value to only "look" for text below the label.
    • FYI: You can override ANY property for ANY Data Element. For example, you could use a completely different extractor type for a Data Field for a specified Document Type.
  5. Press "Ok" to save the override configuration.

  1. Overridden Data Elements will be underlined and appear in blue text in the Labels UI as well.

Furthermore, you can test the override directly from the Labels UI as well. You can actually test extraction for the whole Data Model!

  1. Press the "Test" button to test the Data Model's extraction for the selected document folder in the Batch Viewer (including override settings for any Data Elements).

  1. This will create a "Results" tab. You will be presented with the extraction results for the selected document folder's Data Model.
  2. If there are any Data Elements with overrides, such as our "Purchase Order Number" Data Field, they will extract using the override configuration (just as they will when the document is actually extracted by the Extract activity in a Batch Process).


Version Differences

2021

The Labeling Behavior is brand new functionality in Grooper version 2021. Prior to this version, its functionality could only be approximated by other objects and their properties (for example, a Data Type using the Key-Value Pair collation is at least in some ways similar to how the Labeled Value Extractor Type works). However, the creation of label sets using Document Types, and their implementation as described above, was not available prior to version 2021.