Data Export - 2021

From Grooper Wiki

Data Export is one of the Export Types available when configuring an Export Behavior. It exports extracted document data over a Data Connection, allowing users to export data to a SQL or ODBC compliant database.


Prior to version 2021, extracted data was exported using the Database Export activity. With the introduction of Export Behaviors in version 2021, this changed. Data is now exported to databases using the Export activity and one or more Data Export behavior definitions.



You may download and import the file below into your own Grooper environment (version 2021). It contains Batches with the example document(s) and the Content Model discussed in this article.

The most important goal of Grooper is to deliver accurate data to line-of-business systems, where the information can inform impactful business decisions. To this day, database tables remain one of the main vessels by which this information is stored, and Data Export is one of the main ways to deliver data collected in Grooper.

Data Export allows us to take a document...

...and its extracted data...

...and map that data to column locations in a database table.




There are three important things to understand when using and configuring Data Export to export data to a database:

  • The Export activity
  • Data Elements
  • Data Connections

The Export Activity

Grooper's Export activity is the mechanism by which Grooper-processed document content is delivered to an external storage platform. Export configurations are defined by adding Export Type definitions to Export Behaviors. Data Export is the Export Type designed to export Batch Folder document data collected by the Extract activity to a Microsoft SQL Server or ODBC-compliant database server.

For more information on configuring Export Behaviors, please visit the full Export activity article.

Data Elements

Data Export is the chief delivery device for "collection" elements. Data is collected in Grooper by executing the Extract activity, extracting values from a Batch Folder according to its classified Document Type's Data Model.

A Data Model in Grooper is a digital representation of document data targeted for extraction, defining the data structure for a Content Type in a Content Model. Data Models are objects composed of Data Element objects, including:

  • Data Fields used to target single field values on a document.
  • Data Tables and their child Data Columns used to target tabular data on a document.
  • Data Sections used to divide a document into sections to simplify extraction logic and/or target repeating sections of extractable Data Elements on a single document.
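To make the hierarchy concrete, here is a rough sketch of the example Data Model used later in this article, expressed as plain Python structures. This is purely illustrative; it is not a Grooper API or object model.

```python
# Purely illustrative sketch of a Data Model hierarchy -- these nested
# structures mirror the "Employee Report" example used later in this
# article; they are not an actual Grooper object model.
employee_report_data_model = {
    # Data Fields target single field values on a document.
    "Data Fields": ["Last Name", "First Name", "Employee ID"],
    "Data Tables": {
        # A Data Table's child Data Columns target tabular data.
        "Earnings": ["Code Desc", "MTD", "QTD", "YTD"],
    },
    # Data Sections (none needed for this example) would nest their
    # own child Data Fields/Tables one level deeper.
    "Data Sections": {},
}
```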

With Data Models and their child Data Elements configured, Grooper collects values using the Extract activity.

Depending on the Content Type hierarchy in a Content Model and/or the Data Element hierarchy in a Data Model, there will be a collection, or "set", of values at varying levels of scope in a fully extracted Data Model's hierarchy. That may be the full scope of the Data Model, including any Data Elements inherited from parent Data Models. Or it may be a narrower scope, such as a child Data Section comprised of its own child Data Fields.

Understanding this will be important as Data Export has the ability to take full advantage of Grooper's hierarchical data modeling to flatten complex and inherited data structures. Understanding Data Element hierarchy and scope will also be critical when exporting data from a single document to multiple different database tables to ensure the right data exports to the right places.

Data Connections

Data Export uses a configured Data Connection object to establish a link to SQL or ODBC compliant database tables in a database and intelligently populate said tables. Once this connection is established, collected Data Elements can be mapped to corresponding column locations in one or multiple database tables. Much of Data Export's configuration is assigning these data mappings. The Data Connection presents these mappable data endpoints to Grooper as well as allowing data content to flow from Grooper to the database table when the Export activity processes each Batch Folder in a Batch.

Furthermore, not only can Grooper connect to existing databases using a Data Connection, but it can create whole new databases as well as database tables once a connection to the database server is established.

We discuss how to create Data Connections, add a new database from a Data Connection, and add a new database table from a Data Connection in the #Configuring a Data Connection tutorial below.

FYI We have improved integration with PostgreSQL, Db2, MySQL, and Oracle using the ODBC connection type in version 2021!

While not fully supported in previous versions of Grooper, you can now connect to these data sources with a Data Connection seamlessly in Grooper, allowing for full database export operations via Data Export.

How To

In the following tutorials, we discuss an example of how to set up an Export Behavior using Data Export.

In this example are a couple of different document formats whose data will be collected by a single Content Model. This will help illustrate two key distinctions mentioned above.

  • The first document (the "Employee Report") will demonstrate the flattening of a data hierarchy.
  • The second (the "Personnel Information Report") will give us an avenue to export to multiple tables at once.

The example documents have already been classified and their data extracted. Before we get into Data Export configuration specifics, there are a few things we need to understand about "the story so far":

  • The forms used in this example and how they are formatted.
  • The Content Model used in this example, its Document Types and how their Data Models represent the data we want to collect.
  • The extracted index data, which will be exported to a database.
Database export 001.gif

Understanding the Forms

Document 1: Employee Report

The thing to understand about this document is that some of its data shares a "one-to-many" relationship.

Some of the data is described as "single instance" data. These are individual fields like "Employee Last Name", "Employee First Name" and "Employee ID". For each document, there is only one value for each of these fields. These values are only listed once, and hence only collected once during extraction.

Some of the data, however, is described as "multi-instance" data. The "Earnings" table displays a dynamic number of rows, so there may be a varying number of values for its columns ("Code Desc", "MTD", "QTD", "YTD") depending on how many rows are in the table. There are multiple instances of the "YTD" value across the whole table (and therefore the whole document).

The single instance data, as a result of only being listed once on the document, will only be collected once, but needs to be married to each row of information from the table, in one way or another. The "one" "Employee ID" value, for example, pertains to the "many" different table rows.

This document is meant to show how to flatten data structures. While the single instance data is only collected once, it will be reported many times upon exporting to a database table.
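The flattening described above can be sketched in a few lines of Python. This is a minimal illustration of the concept, not Grooper code; the values are fabricated to match the example document's shape.

```python
# Minimal sketch (not Grooper code) of flattening a one-to-many
# document structure: single-instance fields are repeated on every
# exported row of the multi-instance "Earnings" table.
header = {"Last Name": "Smith", "First Name": "Jane", "Employee ID": 1042}
earnings_rows = [
    {"Code Desc": "Regular", "MTD": 4000.00, "QTD": 12000.00, "YTD": 48000.00},
    {"Code Desc": "Overtime", "MTD": 250.00, "QTD": 600.00, "YTD": 1800.00},
]

# Each flattened row marries the "one" header values to one of the
# "many" table rows -- exactly the shape a single database table needs.
flattened = [{**header, **row} for row in earnings_rows]

for row in flattened:
    print(row["Employee ID"], row["Code Desc"], row["YTD"])
```

Note the "Employee ID" value appears on every output row, even though it was only collected once during extraction.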

Database export 002.png

Document 2: Personnel Information Report

The second document is essentially one big table of personnel information (name, address, email, phone number and the like).

While we ultimately want to collect data from all rows in this table, there are potentially two sets of information here. Some of it is generic personnel information, but some of it is "personally identifiable information" or PII. This information should be protected for legal reasons.

As a result, we will export collected data to two database tables (with the assumption that the second table is "protected".)

This document is meant to demonstrate how to export to multiple tables via one Export Behavior.

DISCLAIMER: The author of this article is not a lawyer.

For educational purposes, we divided "Personnel Information" into two sets: "non-PII" and "PII".

•  non-PII: Employee ID, Phone Number, E-mail address, IP Address, Gender, ZIP
•  PII: First Name, Last Name, SSN, Street Number, Street Name, City, State.


This division was done purely to demonstrate a concept.

PII gets tricky in the real world. For example, an IP address would not normally qualify as PII by itself, but could when combined with other personal information. Please consult your own legal department to determine which of the data you're collecting is PII and should be protected more securely.

Database export 003.png


Understanding the Content Model

The Content Model provided for this tutorial is named "Example Model - Data Export". This Content Model is designed to extract the data for these two different kinds of documents, each represented by its own Document Type.

The Employee Report Document Type

  1. The "Employee Report" Document Type represents our "Employee Report" document.
    • This is the one we will use to demonstrate flattening a data structure that shares a one-to-many relationship on the document.
  2. It has its own child Data Model with Data Elements already configured for extraction.
  3. For the individual fields, represented once in the document, there are three corresponding Data Field elements:
    • "Last Name"
    • "First Name"
    • "Employee ID"
  4. For the tabular data, a Data Table named "Earnings" is established, with four Data Column elements, corresponding to the columns on the document:
    • "Code Desc"
    • "MTD"
    • "QTD"
    • "YTD"


The Personnel Info Report Document Type

  1. The "Personnel Info Report" Document Type represents our "personnel information" documents.
    • This is the one we will use to demonstrate exporting multiple table structures to multiple database tables from a single document.
  2. It too has its own child Data Model with Data Elements already configured for extraction.
  3. The "non-PII" Data Table will extract data from each table row on the document for non-protected personnel information, as described by its child Data Column elements:
    • "Employee ID"
    • "Phone Number"
    • "EMail"
    • "Gender"
    • "Zip"
  4. The "PII" Data Table will extract data from each table row on the document for protected personnel information, as described by its child Data Column elements:
    • "Employee ID"
    • "First Name"
    • "Last Name"
    • "SSN"
    • "Street Number"
    • "Street Name"
    • "City"
    • "State"
  5. This set-up uses a single table extractor to collect each row on the document, but reports it to two different Data Table objects in Grooper (the "non-PII" Data Table and the "PII" Data Table). This will ultimately allow us to parse data from these columns, and place the PII related information into a separate database.
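The parsing described in step 5 can be sketched as follows. This is an illustration only, not Grooper's implementation: one set of extracted rows is split into two outputs, mirroring how a single table extractor reports to both the "non-PII" and "PII" Data Tables. Column names follow the example Document Type; the row data is fabricated.

```python
# Illustrative sketch (not Grooper code): one extracted row set is
# parsed into two outputs, one per target Data Table.
NON_PII_COLS = ["Employee ID", "Phone Number", "EMail", "Gender", "Zip"]
PII_COLS = ["Employee ID", "First Name", "Last Name", "SSN",
            "Street Number", "Street Name", "City", "State"]

extracted_rows = [
    {"Employee ID": 7, "Phone Number": "555-0100", "EMail": "a@b.com",
     "Gender": "F", "Zip": "74075", "First Name": "Ann", "Last Name": "Lee",
     "SSN": "***-**-1234", "Street Number": "12", "Street Name": "Main St",
     "City": "Stillwater", "State": "OK"},
]

# Project each full row down to the columns each table should receive.
non_pii_table = [{c: r[c] for c in NON_PII_COLS} for r in extracted_rows]
pii_table = [{c: r[c] for c in PII_COLS} for r in extracted_rows]
```

Notice "Employee ID" lands in both outputs; it serves as the key linking a record in the protected table back to its counterpart in the unprotected one.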



Verifying Index Data

Before Data Export can send data, it must have data!

It's easy to get in the habit of testing extraction on a Data Field or a Data Model and feel good about the results, but understand that the information displayed when doing so is held in memory and is temporary. When testing a Data Export configuration, it's a good idea to ensure extracted data is actually present for the document Batch Folders whose data you want to export.

When the Extract activity runs, it executes all extraction logic for the Data Model tied to a Batch Folder's classified Document Type. For each Batch Folder document, it creates "Index Data" and marries it to the Batch Folder via a JSON file called Grooper.DocumentData.json.

A couple of ways to verify its existence are as follows:

Option 1

  1. First, navigate to a document Batch Folder object in the node tree.
    • Not the Batch, not the root Batch Folder, not a Page object, but specifically a document Batch Folder object.
    • Here, we have the first Batch Folder in the root folder of our Batch selected.
    • This is where the extracted index data information lives.
  2. From there, click on the "Index Data" tab.
  3. After doing so you can see the extracted data displayed.


FYI Were the document classified (the Batch Folder assigned a Document Type), but not successfully extracted (the Extract activity not applied), the data structure would be present, but the fields empty.

At that point the Batch Folder would have the Document Type's Data Model associated to it, but would not have applied its extraction logic to collect values (That's what the Extract activity is for).


Option 2

Another means of verifying is to actually view the file created by the Extract activity and stored in the Grooper repository's file store location.

  1. Again, click on a document Batch Folder object in the node tree.
  2. Click the "Advanced" tab.
  3. Click the "Files" tab.
  4. In the List View panel you should see a file named Grooper.DocumentData.json.
    • This is the file the Extract activity generates when it processes a Batch Folder. It serializes all the extracted Data Elements' values for the document's Data Model.
  5. When you click on that file, you should see the stored JSON information of the indexed data displayed in the viewer below.
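Because the index data is serialized as JSON, it can be inspected programmatically as well. The snippet below is a hypothetical illustration: the exact schema of Grooper.DocumentData.json is not shown in this article, so the structure here is a simplified stand-in, not the real file format.

```python
import json

# Hypothetical, simplified stand-in for serialized index data -- the
# real Grooper.DocumentData.json schema is not documented here.
sample = '{"Last Name": "Smith", "First Name": "Jane", "Employee ID": 1042}'
index_data = json.loads(sample)

# A quick sanity check that extraction produced values, not just an
# empty data structure (see the FYI note above about classified-but-
# not-extracted documents).
assert all(v not in ("", None) for v in index_data.values())
print(index_data)
```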



Configuring a Data Connection

In order for Data Export to run, it first needs an established connection to a database and subsequent table(s).

Grooper can connect to an existing database and import references to its tables using a Data Connection object. Once connected to the database server, you can even have Grooper create a database AND create tables based on Data Model structures present in Grooper!

In the following tutorial we will cover how to:

  1. Create a new Data Connection object.
  2. Connect to a database server using the Data Connection.
  3. Create a new database from the Data Connection.
  4. Create a database table using the Data Models in our example Content Model.

It is worth noting that this article cannot tell you specifics about permissions in your own environment. The configuration for this article uses Microsoft SQL Server and has given the active Active Directory user full DB Admin privileges to the SQL environment.

Create a Data Connection

New Data Connections are added to the Global Resources folder of the Node Tree.

  1. Right click the Global Resources folder.
  2. Select "Add" then "Data Connection..."
  3. The "Add New Data Connection" window will appear.
  4. Give the new Data Connection a name.
    • We've named ours "DB Export"
  5. Press "OK" when finished.


  1. This will add the new Data Connection to the Global Resources folder.
  2. Next, we will configure connection settings so that Grooper can interoperate with our SQL server.


Configure Connection Settings

Regardless of whether you want to connect to an existing database or create a new one from Grooper, your first step is always the same: you must first connect to a database server.

Grooper can connect to Microsoft SQL servers or any ODBC (Open Database Connectivity) compliant data source. For the purposes of this tutorial, we will connect to a SQL server.

  1. You can choose to connect to a SQL Server or an ODBC server using the Connection Settings property.
  2. Using the dropdown menu, choose either SQL Server or ODBC.
    • Again, we will be connecting to a SQL server. So, SQL Server is selected.


Next, we need to define settings to access the database server. All you really need are the server's name and access rights.

  1. Expand the Connection Settings property.
  2. Using the Server Name property, enter the SQL server name.
    • I was a little on the lazy side for this article and just connected to the SQL Express instance created for the Grooper repository database on my local machine. This is almost assuredly not what you want to do in a production environment, but it will work for testing purposes.
  3. You will also need access rights to the SQL server. Using this Data Connection object, Grooper is going to act as if it were a user, giving Grooper the capabilities to do things like add and drop tables, run queries, and more. Just like a user needs SQL access credentials, so does Grooper.
    • If the active Windows user has Active Directory rights to the SQL server, Windows will pass through your credentials. You can leave the User Name and Password blank in that case.
    • Otherwise, you'll need to enter a User Name and Password to access the database server here.
  4. Go ahead and hit "Save" at this point.


That's it! You're officially connected to the database server now. We can now connect to existing databases, import references to their tables (Keep this in the back of your mind. This will be important later.), create new databases, and new database tables.
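The two credential modes described above (Windows pass-through versus explicit user name and password) can be sketched with a standard SQL Server connection-string format, such as one an ODBC driver would accept. This mirrors the Data Connection settings conceptually; it is not Grooper's internal implementation, and the driver name is an assumption.

```python
# Sketch of the two credential modes for a SQL Server connection
# string. The driver name is an assumption for illustration; Grooper's
# internal connection handling is not shown in this article.
def build_connection_string(server, user=None, password=None):
    base = f"Driver={{ODBC Driver 17 for SQL Server}};Server={server};"
    if user is None:
        # Blank User Name/Password: Windows passes through the active
        # Active Directory user's credentials.
        return base + "Trusted_Connection=yes;"
    # Otherwise, explicit SQL credentials are supplied.
    return base + f"UID={user};PWD={password};"

print(build_connection_string(r"localhost\SQLEXPRESS"))
```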

  1. If you want to verify your connection, press the "Test Connection" button.
  2. The following message will appear if you are successfully connected to the SQL server.


At this point, you have two options:

  1. Connect to an existing database
  2. Create a new database

If you wanted to connect to an existing database, it's very easy.

  1. Select the Database Name property.
  2. Either type in the database's name or select it from the drop down menu.
    • FYI: Grooper has to ping the database in order to populate the dropdown list. Depending on the size of your SQL server, it's often quickest to just type in the database name.


However, now that we're connected to the database, you can also create a brand new database!

Create a New Database from the Data Connection

  1. To create a new database, press the "Create Database..." button.
  2. This will bring up the "Create Database" window.
  3. Using the Database Name property, name the database whatever you like.
    • We named ours "Export_Example_DB"
  4. Press the "Execute" button to create the database.


  1. You will see the newly created database's name populate the Database Name property.
  2. However, as a newly formed, infant database, it has no database tables. You can see the "Database Tables" panel is empty. Next, we will add some database tables using the data structures of the Data Models in our example Content Model.


FYI If you connect to the server in Microsoft SQL Management Studio, you can verify the database is created.

See here, the "Export_Example_DB" database is added to the list of databases. Through the Data Connection, Grooper has a direct connection to this SQL environment to add and alter databases.


Create Database Tables from the Data Connection

We will end up creating three database tables by the end of this section:

  1. A table for the "Employee Report" Document Type's extracted Data Elements with its "one-to-many" related data elements flattened to a single table structure.
  2. A table for the "Personnel Info Report" Document Type's extracted "non-PII" Data Table.
  3. A table for the "Personnel Info Report" Document Type's extracted "PII" Data Table.

Table 1: Employee Report Data

  1. To create a new database table, press the "Create Table..." button.
  2. This will bring up the "Create Table" window.
  3. Database tables are created from Grooper using Data Elements from a Data Model. The Data Model's Data Fields and/or Data Columns will form the columns of the SQL table, housing extracted values from each document folder upon export. First, you must define the Content Type (e.g. a Content Model or one of its Document Types) whose Data Model you want to use to create the database table.
  4. The Content Type property's dropdown menu will present you a mini Node Tree view to select a Content Type from your Content Models folder.
    • In our case, we will select the "Employee Report" Document Type from our "Example Model - Data Export" Content Model.
    • Why didn't we choose the parent Content Model? It's all about Data Model hierarchy and scope. We want access to all the Data Elements in the "Employee Report" Document Type's Data Model. If we chose the parent Content Model, we would only have access to its Data Model's Data Elements, which would not include any of the Data Elements we want to export.
      • Data Elements are passed from parent Data Model to child Data Model, not the other way around.


  1. Next, you will need to select the data scope in the Content Type's Data Model, using the Data Element Scope property.
    • This property will present you a drop down to select either the parent Data Model or sub-levels in its Data Element scope, branched by Data Section or Data Table elements.
    • Choosing a data scope is all about defining which Data Elements are accessible for database table creation (and ultimately data export to the created table).
  2. In our case, we want to choose the "Earnings" Data Table for our scope.
    • Think back to the notion of the one-to-many relationship. This database table can have a dynamic number of rows and the Data Columns are capable of capturing and reporting back unlimited instances of data, hence multiple rows. The Data Fields within this scope, however, are only capable of capturing and returning a single piece of data.
    • But, given the nature of hierarchy inheritance, at the "Earnings" Data Table scope, the database table that will be created will make columns not just for the Data Columns of the Data Table, but for each of the Data Fields contained within the Data Model's scope as well (i.e. The "Last Name", "First Name" and "Employee ID" Data Fields).
    • FYI: Were there Data Fields further up the inheritance tree, say at the base Data Model of the parent Content Model, the database table would also inherit those as well.
    • In other words, think about what Data Elements you want access to and go down the Data Model's hierarchy to widen the Data Element scope.


  1. The Table Name property will auto-populate with a concatenation of the Content Type and Data Element Scope properties' string values.
    • However, you can change this to whatever you want.
  2. Now, you're ready to create the database table. Press the "Execute" button.


  1. Here's where Grooper is doing the hard work for us. Notice in the "Review/Edit SQL Statement" window the SQL Statement required to create our table is already written for us.
  2. However, you can edit this statement, if need be.
    • For example, at the time of writing this article there was a bug involving the "Employee ID" Data Field. Its Value Type in Grooper is set to Int16. This was not properly converting to a "smallint" SQL data type for the corresponding SQL column. So we added smallint to the SQL Statement after [Employee ID].
  3. Click the "Run SQL Statement" button to create the table.
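The kind of statement shown in the "Review/Edit SQL Statement" window can be approximated as below. This sketch runs against an in-memory SQLite database purely so it is self-contained; Grooper targets Microsoft SQL Server or ODBC sources, and the real generated DDL may differ in type names and quoting. The point to notice is that the inherited Data Fields become columns alongside the "Earnings" Data Columns, and "Employee ID" gets the smallint type discussed above.

```python
import sqlite3

# Approximation of the generated CREATE TABLE statement. Run against
# in-memory SQLite for illustration only -- the real statement targets
# SQL Server/ODBC and may differ in type names and quoting.
ddl = """
CREATE TABLE Employee_Report_Earnings (
    [Last Name]   TEXT,
    [First Name]  TEXT,
    [Employee ID] SMALLINT,   -- the Int16 field noted above
    [Code Desc]   TEXT,
    [MTD]         REAL,
    [QTD]         REAL,
    [YTD]         REAL
);
"""
conn = sqlite3.connect(":memory:")
conn.execute(ddl)

# List the resulting columns: inherited Data Fields first, then the
# "Earnings" Data Columns -- the flattened one-to-many structure.
cols = [c[1] for c in conn.execute("PRAGMA table_info(Employee_Report_Earnings)")]
print(cols)
```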


  1. Upon successfully creating the table, the "Database Tables" panel will have our newly created table listed.
    • It will initially display with a red dot on the icon. This will change to a green dot when we import this table's reference (more on that later).
  2. The "Table Columns" panel will display the data structure of the table, listing the SQL table's columns, their data types and other information.
  3. The "Data Preview" panel will display data within the table.
    • Because this table was just created, it will not have any data to display. We haven't exported any data to it yet!


Table 2: Personnel Info Report non-PII Data

For the other two tables, it's mostly a repeat of the same steps, just taking care to select the appropriate Content Type and Data Element scope.

  1. To create the second database table, press the "Create Table..." button.
  2. Select the appropriate Content Type, using the Content Type property.
    • For this database table, we need access to the "Personnel Info Report" Document Type's Data Model. We have, hence, selected the "Personnel Info Report" Document Type of our example Content Model.
  3. Select the appropriate data scope, using the Data Element Scope property.
  4. This table is going to house all the non-PII related data. So, we have selected the "non-PII" Data Table element.
    • This will create a database table using only its Data Columns. The "PII" Data Table's columns will be out of scope.


  1. Press "Execute".
  2. Then, press "Run SQL Statement".


  1. Success! The table is now successfully created.


Table 3: Personnel Info Report PII Data

This database table can be created with the exact same steps as described above with just one key difference:

  1. The Data Element Scope will be set to the "PII" Data Table not the "non-PII" Data Table.


  2. We now have our third database table added to the SQL database.
    • We just have one last thing to do before we export data to these newly created database tables: import their references.


Import Table References

Before we can export data to these newly created tables, we must import their table references. This part is critical in order for Grooper to interact with a database table, whether to export data using Data Export or perform a Database Lookup operation. Importing the table references will give Grooper an object it can reference when mapping data between Grooper and the database table, ultimately allowing for data to flow from extracted Batch Folders to the database table.

Importing a table reference is as simple as a click of a button.

  1. Select the database table you wish to import from the Data Connection's "Database Tables" panel.
    • We're starting with the "Employee_Report_Earnings" table.
  2. Press the "Import Table Reference" button.
  3. You will see the following message upon successful import.


  1. This will create a Database Table object, as a child of the Data Connection.
    • We will utilize this object later when we get to configuring our export.
  2. Also, when the database table is imported the red circle on its icon will change to green.


  1. We will also need to import the remaining two table references.
  2. Next, we will use these three table references to export document data over our Data Connection to these SQL database tables.



Configuring Export Behaviors for Data Export

Data Export is one of the Export Type options when configuring an Export Behavior. Export Behaviors control what document content for a Batch Folder is exported where, according to its classified Document Type. As such, in order to configure a Data Export, you must first configure an Export Behavior for a Content Type (a Content Model or its child Content Categories or Document Types).

In our case, we want to perform two different kinds of export, depending on the document Batch Folder's classified Document Type.

  • For the "Employee Report" Document Type, we want to export its collected Data Elements to our first database table.
  • For the "Personnel Info Report" Document Type, we want to export its collected Data Elements (which are collected using an entirely different Data Model and have an entirely different data structure) to our second and third database table.

The basic idea behind Export Behaviors is that, based on the kind of document you're looking at, you can tell Grooper how you want to export it.

Export Behaviors can be configured in one of two ways:

  1. Using the Behaviors property of a Content Type object
    • A Content Model
    • A Content Category
    • Or, a Document Type
  2. As part of the Export activity's property configuration

When the Export activity processes each Batch Folder it will execute the Export Behaviors, according to their configuration settings.

FYI In general, users will choose to configure Export Behaviors either on the Content Type object it applies to or local to the Export activity step in a Batch Process.

This may just boil down to personal preference. Once configured, there is no functional difference between an Export Behavior configured on a Content Type and one configured on an Export step. In either case, they will accomplish the same goal.

However, it is possible to configure Export Behaviors in both locations. If you do this, you will need to understand the Export activity's Shared Behavior Mode property options. This will affect whether and how two Export Behaviors configured for the same Content Type will execute. Please visit the Export article for more information.

Add an Export Behavior

Option 1: Content Type Export Behaviors

An Export Behavior configuration can be added to any Content Type object (i.e. Content Models, Content Categories, and Document Types) using its Behaviors property. Doing so will control how a Document Type "behaves" upon export.

  1. Select the Content Type whose Export Behavior you want to configure in the Node Tree.
    • We will start by configuring an Export Behavior for the "Employee Report" Document Type.
  2. To add an Export Behavior, first select the Behaviors property.
  3. Then, press the ellipsis button at the end of the property.


  1. This will bring up the Behaviors collection editor window.
  2. Press the "Add" button.
  3. Select Export Behavior.
    • FYI: Child Content Type objects will inherit export settings from their parent Content Type's Export Behavior configuration.
    • Also, you can only configure one Export Behavior per Content Type object. However, you can configure an Export Behavior for any Content Type in a Content Model. Functionally, this is how you add multiple Export Behaviors for a single Content Model.
      • For example, our two Document Types need different Export Behavior configurations. We would not want to configure their parent Content Model's Export Behavior. That would apply that single export configuration to all Document Types. That's not going to work for us. The "Employee Report" documents' data needs to go to one location and the "Personnel Info Report" documents' data needs to go somewhere entirely different. Instead, we will end up configuring the Behaviors property of both Document Types individually. Thus, we end up with two Export Behavior configurations for the Content Model.


  1. You will see the Export Behavior added to the Behaviors list.
  2. Selecting it, you can now add one or more Export Definitions with the Export Definitions property.
    • Next, we will add a Data Export definition.


Option 2: Export Activity Export Behaviors

Export Behaviors can also be configured as part of the Export activity's configuration. These are called "local" Export Behaviors. They are local to the Export activity step in the Batch Process.

  1. For example, here we have a working Batch Process selected in the Node Tree.
    • This is a simple Batch Process that could have been used to process these documents, recognizing their text, classifying the document Batch Folders, and extracting data from them. The last step in this Batch Process is an Export step.
  2. Select the Export step of the Batch Process.
  3. To add an Export Behavior, select the Export Behaviors property.
  4. Then, press the ellipsis button at the end of the property.


  1. This will bring up the Export Behaviors collection editor window.
  2. Press the "Add" button to add a new Export Behavior
  3. An Export Behavior will be added to the list.
  4. With the Export Behavior selected you must define which Content Type the behavior applies to using the Content Type property.
    • Note in both cases, a Content Type is involved in configuring Export Behaviors.
    • Whether local to the Export activity or as part of a Content Model's configuration, Grooper needs to know what to do upon export, given a certain Content Type. Once Grooper knows what kind of document it's looking at, we can then inform it what to do in terms of exporting its document content.
  5. Using the dropdown menu, select which Content Type scope should utilize the Export Behavior by selecting either a top-level parent Content Model or one of its child Content Categories or Document Types.
    • Keep in mind you can only select a single Content Type here. You can only configure one Export Behavior per Content Type object.
    • Again, child Content Type objects will inherit export settings from their parent Content Type's Export Behavior configuration. However, multiple Export Behaviors may be added locally to the Export activity.
      • For example, since both of our Document Types need a unique Export Behavior configuration, we would add one Export Behavior to the list for each one.


  1. Here, we have selected the "Employee Report" Document Type. This Export Behavior would then only apply to Batch Folders of this Document Type.
  2. Once a Content Type is selected, you can add one or more Export Definitions with the Export Definitions property.
    • We will discuss adding a Data Export definition next.


Add an Export Definition

  1. We will choose to configure the Export Behavior using "Option 1", adding it to the "Employee Report" Document Type.
  2. We will add the Export Behavior to its Behaviors property.


Regardless of whether you configure the Export Behavior on a Content Type object or locally in the Export activity's configuration, your next step is adding an Export Definition.

  1. Once you've added an Export Behavior, select the Export Definitions property.
  2. To add an Export Definition, press the ellipsis button at the end of the property.


  1. This will bring up an Export Definition list editor to add one or more Export Types.
    • Next, we will add a Data Export definition to the list.


Add a Data Export

Export Definitions functionally determine three things:

  1. Location - Where the document content ends up upon export. In other words, the storage platform you're exporting to.
  2. Content - What document content is exported: image content, full text content, and/or extracted data content.
  3. Format - What format the exported content takes, such as a PDF file or XML data file.

Export Definitions do this by adding one or more Export Type configurations to the definition list. The Export Type you choose determines how content is exported and which platform it is exported to. In our case, we want to use a Data Connection to export extracted document data ("Content") to a database table ("Location" and "Format"). We will add a Data Export to the definition list.

  1. To do this, press the "Add" button.
  2. Choose Data Export from the list.


  1. This will add an unconfigured Data Export to the Export Definitions list.
  2. For all Data Export configurations, the first step is configuring the Connection property.
  3. Use the dropdown menu to select a Data Connection from the node tree.
    • This will provide Grooper with the information required to connect to the external database upon export.


  1. With the Data Connection established, the next step is to map data from Grooper to a table in the database.
  2. Next, we will review using the Table Mappings property to map Data Elements in a Data Model to corresponding column locations in a database table.


Table Mappings Example 1: Flattening a Data Model

We will continue configuring our "Employee Report" Document Type's Data Export first. For this document, we have to deal with a "one-to-many" relationship between data that exists once on the document and the dynamic data in the multiple rows of the "Earnings" table.

For this example, we will flatten the Data Model's element hierarchy. By flattening the Data Model, we can output all the document's data to a single database table, despite the fact some of its data is present once on the document (the single-instance Data Field values) and some is more dynamic (multi-instance Data Column values from our Data Table).

We will essentially re-output the single-instance values, marrying them to every row instance output from the table on the document. Think of the single-instance values as "document level". These values ("Last Name", "First Name", "Employee ID") pertain to the whole document. Therefore, they also pertain to every row of information in the "Earnings" table.
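The flattening concept above can be sketched in a few lines of Python: the document-level field values are simply repeated alongside every row of the multi-instance table. The field names and sample values below follow the article's "Employee Report" example, but the `flatten` function is purely illustrative, not a Grooper API.

```python
def flatten(doc_fields, table_rows):
    """Merge single-instance document fields into each table row."""
    return [{**doc_fields, **row} for row in table_rows]

# Document-level (single-instance) values extracted once per document.
doc_fields = {"Last_Name": "Smith", "First_Name": "Pat", "Employee_ID": "1001"}

# Multi-instance rows extracted from the "Earnings" Data Table.
# (Column names here are illustrative assumptions.)
table_rows = [
    {"Earning_Type": "Regular", "MTD": "3200.00"},
    {"Earning_Type": "Overtime", "MTD": "450.00"},
]

flat = flatten(doc_fields, table_rows)
# Each output row now carries the document-level values alongside the
# row-level values, ready to insert into one flat database table.
```

Every exported row repeats "Smith", "Pat", and "1001", which is exactly the "marrying" described above.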


We accomplish this feat through Table Mappings. This will allow us to hook up the right Data Elements in a Data Model to the right database table columns upon exporting collected values. Much like data hierarchy scope was important when we created this database, so will it be when we configure Table Mappings for export.

  1. With the Data Export selected, select the Table Mappings property.
  2. Press the ellipsis button at the end.


  1. This will bring up the "Table Mappings" collection editor.
  2. Press the "Add" button to add a new set of mappings.
  3. Use the Source Scope property to select the appropriate Data Element scope in the Content Type's Data Model.
  4. In our case, we're choosing the "Earnings" Data Table.
    • Selecting this Data Table as the scope will not only allow mapping to its Data Columns, but all Data Fields up its logical hierarchical tree path (i.e. "Last Name", "First Name", and "Employee ID").


Next, we need to define which database table we're mapping to. There are now three tables in the database connected to our Data Connection. Each of their table references has been imported to Grooper. We need to pick which one we want to use.

  1. Choose which database table you want to map using the Target Table property.
  2. Select a database table using the dropdown menu.
    • In our case the "Employee_Report_Earnings" database table.
    • FYI: Without importing the database table's reference (creating the child Database Table object for our Data Connection), you would not see the database table listed here.


Now that we have a source data scope selected (where we're mapping from) and a target database table selected (where we're mapping to), we will map the Data Elements in our Data Model's scope to columns in the targeted database table.

  1. Select the Column Mappings property.
  2. Press the ellipsis button at the end.


  1. This will bring up the "Column Map" editor.
  2. Each database table column will be listed as a property in the grid.
  3. Using the dropdown menu, you can select a corresponding Data Element from the Data Model's scope.
    • FYI: Since we selected the "Earnings" Data Table as our scope, its direct children are listed as their simple Data Element names. For example, "MTD" for the "MTD" Data Column.
      • For Data Elements inherited from a parent scope, they will be listed as "Scope Source"."Data Element". For example, the "Last Name" Data Field was inherited from the "Employee Report" Document Type's base Data Model. So, it is listed as "Employee_Report.Last_Name".
    • For the Last Name database column, we have selected the Employee_Report.Last_Name Data Element.


There is also a mapping shortcut to automatically assign mappings if a Data Element's name matches a database column's name.

  1. Right-click any property in the grid.
  2. Select "Auto-Map..."


  1. Database table columns will be automatically mapped to Data Elements as long as their names match.
    • Furthermore, since we created this database table from Grooper, using the same Data Model, all of these column names and Data Element names will match.
  2. Press "OK" to finish configuring mappings.
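Conceptually, "Auto-Map" pairs each database column with the Data Element whose name matches it, including inherited elements listed in "Scope.Element" form as described above. The sketch below illustrates that idea by matching on the leaf name; it is a rough approximation for explanation's sake, not Grooper's actual implementation.

```python
def auto_map(db_columns, data_elements):
    """Pair each database column with the Data Element whose leaf name matches."""
    # "Employee_Report.Last_Name" -> leaf name "Last_Name"
    by_leaf = {e.split(".")[-1]: e for e in data_elements}
    return {col: by_leaf[col] for col in db_columns if col in by_leaf}

db_columns = ["Last_Name", "First_Name", "Employee_ID", "MTD"]
data_elements = [
    "Employee_Report.Last_Name",   # inherited from the parent scope
    "Employee_Report.First_Name",
    "Employee_Report.Employee_ID",
    "MTD",                         # direct child of the "Earnings" scope
    "YTD",
]

mapping = auto_map(db_columns, data_elements)
# Every column finds a match; "YTD" is ignored because no database
# column shares its name.
```

Because the database table in this walkthrough was generated from the same Data Model, every column matches and no manual mapping is needed.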


We now have everything we need to export data from "Employee Report" documents to a database, using Data Export.

  1. Press "OK" on this and all subsequent windows to finalize the Export Behavior configuration.
  2. Don't forget to save the Content Type when you're done.

With this Export Behavior in place, whenever the Export activity processes an "Employee Report" Batch Folder, data will be exported to the database as we've designed through these settings.

Before testing the Export step, in the next tab, we will configure another Data Export for the "Personnel Info Report" Document Type with its own set of database table mappings to export their extracted data to two different database tables.


Table Mappings Example 2: Exporting to Multiple Database Tables

Despite the fact you can only set up one Export Behavior per Content Type, you can add as many Export Definitions as you need.

In this example, we will take the extracted data from our "Personnel Info Report" Document Type and export PII information to one database table and non-PII information to a different table, using two Data Export definitions.
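The PII/non-PII split described above amounts to dividing one extracted record into two row shapes, each destined for its own database table via its own set of Table Mappings. A conceptual sketch, using illustrative field names that are assumptions rather than values from the sample Content Model:

```python
# One extracted record from a "Personnel Info Report" document.
# (All field names and values here are hypothetical examples.)
record = {
    "Name": "Pat Smith",
    "SSN": "123-45-6789",
    "Department": "Accounting",
    "Hire_Date": "2018-04-02",
}

PII_FIELDS = {"Name", "SSN"}

# Split the record into the two row shapes, one per target table.
pii_row = {k: v for k, v in record.items() if k in PII_FIELDS}
non_pii_row = {k: v for k, v in record.items() if k not in PII_FIELDS}
# pii_row     -> "Personnel_Info_Report_PII" table
# non_pii_row -> "Personnel_Info_Report_non_PII" table
```

In Grooper, this split is expressed declaratively: two Table Mappings within one Data Export, each pointing a different Data Table scope at a different Target Table.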

  1. We want to export data for the "Personnel Info Report" documents. We will configure the Export Behavior using the "Personnel Info Report" Document Type.
  2. We've already added the Export Behavior to the Behaviors collection list.
  3. We've added a single Data Export definition.
  4. We've already pointed the Connection property to our "DB Export" Data Connection.
  5. Next, we will assign the Table Mappings.


  1. Using the "Add" button, we've added a Table Mapping for the first of the two Data Table elements used to collect information from the report.
  2. For the Source Scope, we've selected the "non-PII" Data Table.
  3. We want to send all the data Grooper collects for that Data Table to the "Personnel_Info_Report_non_PII" database table. We have selected that database table for the Target Table.
  4. All that's left is to assign our Column Mappings.


  1. Next, we mapped our source Data Elements (The Data Columns in the "non-PII" Data Table in this case) to our target database table columns.
    • Since the names all matched, we just used the "Auto Map" feature.
  2. Press "OK" to finish.


Now we just need to set up mappings for the "PII" Data Table. In this case, we're exporting to the same database, just two different database tables within it. Since we're already mapping to the database connected via the "DB Export" Data Connection, all we need to do is add another set of Table Mappings.

  1. Press the "Add" button to add a new set of mappings.
  2. For the second set of mappings, we've selected the "PII" Data Table for the Source Scope.
  3. This mapping will push the extracted PII information to the "Personnel_Info_Report_PII" database table, which we have selected for the Target Table.
  4. Last but not least, we've mapped our source Data Elements to their target database table columns using the Column Mappings property.

FYI: Here, we added two sets of table mappings to two database tables in the same database. What if you wanted to add two sets of mappings to database tables in different databases?

That's easily doable. You would need two Data Connections, one to connect Grooper to each individual database. Then, you would need to add two Data Export definitions, each one using one of the two Data Connections. From there, you'd add the individual table mappings to each individual Data Export.


We now have everything we need to export data from "Personnel Info Report" documents to two tables in a database, using Data Export.

  1. Press "OK" on this and all subsequent windows to finalize the Export Behavior configuration.
  2. Don't forget to save the Content Type when you're done.

Next, we will apply the Export activity to the document Batch Folders in our sample Batch, which will export data according to their Document Type's Export Behavior using the Data Export definitions.



Applying the Export Activity and Reviewing the Results

Process the Export Step

With the two Export Behaviors configured, we can now test our export. Export Behaviors are executed by the Export activity.

  1. We will use this example Batch Process to demonstrate our Data Export configurations detailed in the previous tutorial.
  2. The Export activity in our Batch Process will apply our Export Behavior to every Batch Folder in the Batch.
  3. FYI: Because we configured the Export Behavior on Content Type objects (using the two Document Types' Behaviors property editor), we do not have to configure the Export activity's local properties.
    • We've given Grooper all the information it needs to export content. The Export activity will go through each Batch Folder in the Batch. It will see the Batch Folders are classified with one of the Document Types in our Content Model and use their Export Behavior configuration settings to export document content.


We will test our export using the Export activity's "Unattended Activity Tester" tab.

  1. Expand the Batch Process to reveal its child Batch Step nodes.
  2. Select the Export activity step.
  3. Switch to the "Unattended Activity Tester" tab.
  4. Press the "Process All..." button.
    • On the subsequent screen, press the "Start" button to start processing the Batch. This will apply the Export activity, as configured in the Batch Process, to all items in the activity's scope (Folder Level 1 in our case).


Review the Export

You can easily review the exported data in one of two ways:

  • In the "Data Preview" panel of any of the Database Table reference objects.
  • By connecting to the database in Microsoft SQL Server Management Studio.

  1. Here, we have the "Employee_Report_Earnings" Database Table reference object selected.
  2. You can see in the "Data Preview" window that there is now data present!
    • This is information Grooper collected and exported, using the Data Export table mappings we configured for the "Employee Report" Document Type's Export Behavior.


Using SQL Server Management Studio, we can also verify the data was exported.

  1. This is the same database table we had selected in Grooper.
  2. Running a SELECT statement, we can see the exact same data we saw in Grooper is present in the database.
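To show the kind of round trip performed in SQL Server Management Studio, here is an equivalent check against a throwaway SQLite database (standing in for the real SQL Server database; the table, column names, and sample rows follow this article's "Employee Report" example and are illustrative).

```python
import sqlite3

# In-memory stand-in for the exported-to database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Employee_Report_Earnings (
        Last_Name TEXT, First_Name TEXT, Employee_ID TEXT,
        Earning_Type TEXT, MTD TEXT
    )
""")

# Rows as Data Export would have written them: document-level values
# repeated on each "Earnings" table row (the flattened shape).
conn.executemany(
    "INSERT INTO Employee_Report_Earnings VALUES (?, ?, ?, ?, ?)",
    [
        ("Smith", "Pat", "1001", "Regular", "3200.00"),
        ("Smith", "Pat", "1001", "Overtime", "450.00"),
    ],
)

# The verification query itself.
rows = conn.execute("SELECT * FROM Employee_Report_Earnings").fetchall()
```

The same `SELECT * FROM Employee_Report_Earnings` run in SQL Server Management Studio should return the rows Grooper exported.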