2.90:Database Export

From Grooper Wiki

Database Export is one of the main ways to deliver data collected in Grooper.

A completed Content Model and accompanying Batch for what will be built can be found by following this link and downloading the provided file. Downloading it is not required to understand this article, but it can be helpful for following along with the content. This file was exported from, and is meant for use in, Grooper 2.9.

About

The most important goal of Grooper is to deliver accurate data to line-of-business systems, where it can be integrated into impactful business decisions. To this day, database tables remain one of the main vessels by which this information is stored. Grooper's Database Export activity is the mechanism by which this delivery is performed. Database Export uses a configured Data Connection to establish a link to tables in a Microsoft SQL Server or ODBC-compliant database and intelligently populate those tables.

Two key distinctions of Grooper's Database Export activity are its ability to take full advantage of Grooper's sophisticated hierarchical data modeling to flatten complex, inherited data structures, and the ease with which it delivers to multiple tables at once.

How To

The following is an example of how to set up the Database Export activity. The example uses two different document formats whose data will be collected by a single Content Model, which helps illustrate the two key distinctions mentioned above. The first document demonstrates flattening inherited data, while the second provides an avenue to export to multiple tables at once.


Understanding the Forms

The thing to understand about the document displayed on the right is that it combines single-instance data (individual fields of information) with multi-instance data (a table containing a dynamic number of rows), i.e., a one-to-many relationship. Because the single-instance data (Last Name, First Name, Employee ID) is listed only once on the document, it will be collected only once, but it needs to be married to each row of information from the table. While the single-instance data is collected once, it will be reported many times. This document demonstrates flattening inherited data structures.
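The flattening described above can be sketched in plain Python. Grooper performs this step internally during Database Export; the field and column names below are hypothetical stand-ins, not the exact names from the sample document:

```python
# Single-instance Data Fields, extracted once per document (hypothetical values).
header = {"LastName": "Smith", "FirstName": "Jane", "EmployeeID": "E-1001"}

# Multi-instance rows extracted from the document's table (hypothetical columns).
table_rows = [
    {"PayCode": "REG", "Hours": 80.0},
    {"PayCode": "OT", "Hours": 4.5},
]

# Flattening: the header fields are repeated on every row, producing the
# one-to-many output that is ultimately written to the database table.
flattened = [{**header, **row} for row in table_rows]

for record in flattened:
    print(record)
```

Each exported row carries its own copy of the single-instance values, which is exactly the "collected once, reported many times" behavior described above.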


The second document collects information only from its main table, but that table contains Personally Identifiable Information (PII) that should be protected. As a result, its information will be exported to two tables (with the assumption that the second table is "protected"). This document demonstrates how to export to multiple tables via one step in a Batch Process.
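The idea of routing one extracted table to two destinations can be sketched as follows. This is only an illustration of the splitting logic, with hypothetical column names and obviously fake sample values; in Grooper the routing is configured declaratively rather than coded:

```python
# One extracted table; the PII column(s) are routed to a separate,
# "protected" table keyed by the same row identifier.
rows = [
    {"RowId": 1, "Item": "Widget", "Qty": 3, "SSN": "123-45-6789"},
    {"RowId": 2, "Item": "Gadget", "Qty": 1, "SSN": "987-65-4321"},
]

PII_COLUMNS = {"SSN"}

# Non-sensitive columns go to the general-purpose table.
public_rows = [{k: v for k, v in r.items() if k not in PII_COLUMNS} for r in rows]

# PII columns (plus the shared key) go to the protected table.
protected_rows = [{k: r[k] for k in ("RowId", *PII_COLUMNS)} for r in rows]
```

The shared RowId lets the two tables be rejoined later by anyone with access to both, while the protected table can carry stricter database permissions.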

Understanding the Content Model

The Content Model extracting the data for these documents is fairly straightforward. There are two Document Types, each with its own Data Model.

The first Data Model is the one representing the one-to-many relationship. Notice that the fields represented once in the document are modeled as Data Fields. For the tabular data, a Data Table was established.

The second Data Model uses one table extractor to collect all the data, but reports it to two different tables.

Index Data

Before the Database Export activity can send data, it must have data. It's easy to get in the habit of testing extraction on a Data Field or a Data Model and feel good about the results, but it must be understood that the information displayed when doing so is in memory, i.e., temporary. When the Extract activity is successfully run against a document, it creates Index Data and marries it to the document via a JSON file called Grooper.DocumentData.json. A couple of ways to verify its existence are as follows.

  1. First, navigate to the Document object in the node tree; not the Batch, not the Batch Folder, not the Page, but specifically the Document object. This is where the information lives.
  2. From there, click on the Index Data tab.
  3. After doing so, you can see data displayed. Were the document classified but not successfully extracted, the data structure would be present but the fields empty.


  1. Another means of verifying is, again, to click on the Document object in the node tree.
  2. Click the Advanced tab.
  3. Click the Files tab.
  4. In the List View you should see the Grooper.DocumentData.json file.
  5. When you click on that file, you should see the stored JSON information of the indexed data displayed in the viewer below.
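Because the index data is stored as ordinary JSON, it can be inspected with any JSON tool once located. The structure shown below is a hypothetical stand-in only; the real schema of Grooper.DocumentData.json is Grooper's own and is not documented in this article:

```python
import json

# Hypothetical stand-in for the kind of content stored in
# Grooper.DocumentData.json: field values plus extracted table rows.
sample_document_data = """
{
  "LastName": "Smith",
  "FirstName": "Jane",
  "Earnings": [
    {"PayCode": "REG", "Hours": 80.0}
  ]
}
"""

data = json.loads(sample_document_data)
print(sorted(data.keys()))
```

The point is simply that the Files viewer is showing you a plain JSON document, so anything that can parse JSON can read the same index data.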

Configuring a Data Connection

In order for the Database Export activity to run, it needs an established connection to a database and its table(s). Grooper can connect to an existing database and import references to its tables, or, easily enough, Grooper can create the database for you and create tables based on data structures present in Grooper!

The following setup will have Grooper create the database and its tables.

It is worth noting that this article cannot speak to the specifics of permissions in your environment. The configuration for this article uses Microsoft SQL Server, not ODBC, and the active Active Directory user has been given full DB admin privileges in the SQL environment.

  1. Expand the node tree to the Infrastructure section and, in the Data Connections section, select Add > Data Connection...
  2. In the Add New Data Connection window, give the object a name and click OK.
  3. As mentioned above, the configuration settings listed here are specific to this environment:
    • Set the Provider property to SqlServer.
    • The Server Name property should be the SQL instance. In this case, the hostname of the computer is enough because the SQL environment was established with a default SQL instance, therefore no named instance. Were there a named SQL instance, it would be something like OKCSPRKI01\MSSQLSERVER.
    • Because Grooper will create the database, the Database Name property will be left blank for now.
    • The User Name and Password properties are left blank here, which passes the currently logged-in AD user's credentials to SQL.
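The properties above map onto the familiar SQL Server connection-string conventions. Grooper assembles its connection internally; the sketch below is only an illustration of how those settings typically combine, using an ADO.NET-style string format:

```python
def sql_server_connection_string(server, database="", user="", password=""):
    """Illustrative only: combine the Data Connection properties into an
    ADO.NET-style SQL Server connection string. Not Grooper's actual API."""
    parts = [f"Server={server}"]
    if database:
        parts.append(f"Database={database}")
    if user:
        parts.append(f"User Id={user};Password={password}")
    else:
        # Blank User Name/Password passes the logged-in AD user's
        # identity through via integrated (Windows) authentication.
        parts.append("Integrated Security=true")
    return ";".join(parts)

# Default instance: the hostname alone is enough.
print(sql_server_connection_string("OKCSPRKI01"))
# Named instance: host\instance, as in the Server Name example above.
print(sql_server_connection_string(r"OKCSPRKI01\MSSQLSERVER"))
```

This also makes clear why blank credentials are not an error: they select integrated security rather than SQL authentication.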


With a connection to the SQL environment established, we can make Grooper create our database for us.

  1. Click the Create Database button.
  2. In the Create Database window, give the database a name, and click the OK button.
  3. The Connection Settings and Database Name properties will be updated to reflect the new database information entered.


This Data Connection is not complete until table references have been imported (this particular Data Connection will end up with three table references). Grooper has already done the work of creating and connecting to the database, so let's keep that momentum going and have it create the tables as well. Three different tables need to be created and their references imported, owing to the two Document Types and their Data Models (the second Document Type contains a Data Model with two Data Tables). The following process describes importing the first table; simply repeat it for the second and third tables (which will not involve the inheritance considerations discussed in step 5, because the scope for the Database Export 02 Document Type has Data Tables and no Data Fields).

1. Click the Create Table button.
2. Click the Content Type property and click the drop-down arrow to get a mini node tree view.
3. Select the first Content Type. In the case of this example, it will be the Document Type named Database Export 01.

4. Because the Database Export 01 Content Type was selected, the following Data Element Scope property will have a drop-down containing Data Elements relative to this particular Content Type.
5. Selecting the Earnings Data Table will set the baseline structure of the table to be created, and it will inherit upward.

  • Think back to the notion of the one-to-many relationship. This table can have a dynamic number of rows, and the Data Columns are capable of capturing and reporting back unlimited instances of data, hence multiple rows. The Data Fields within this scope, however, are only capable of capturing and returning a single piece of data. But, given the nature of inheritance, the table that will be created will include columns not just for the Data Columns of the Data Table, but also for each of the Data Fields contained within this scope.
  • Were there Data Fields further up the inheritance tree, say at the base Data Model of the main Content Model, the table would attempt to inherit those as well.
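The resulting table shape can be sketched with a short script. SQLite stands in for SQL Server here purely to show the DDL is well formed, and the Data Element names are hypothetical stand-ins for the sample Content Model's actual elements:

```python
import sqlite3

# Hypothetical Data Elements in scope for "Database Export 01":
# inherited single-instance Data Fields plus the table's Data Columns.
inherited_fields = ["LastName", "FirstName", "EmployeeID"]
table_columns = ["PayCode", "Hours", "Amount"]

# The created table gets a column for every Data Element in scope:
# the inherited Data Fields first, then the Data Table's own columns.
all_columns = inherited_fields + table_columns
ddl = "CREATE TABLE DatabaseExport01_Earnings ({})".format(
    ", ".join(f"[{c}] TEXT" for c in all_columns)
)

# SQLite accepts SQL Server-style [bracketed] identifiers, so the same
# DDL shape runs here as a quick sanity check.
conn = sqlite3.connect(":memory:")
conn.execute(ddl)
cols = [row[1] for row in conn.execute("PRAGMA table_info(DatabaseExport01_Earnings)")]
print(cols)
```

Note that the single-instance fields become ordinary columns alongside the table's columns, which is the column-level view of the flattening discussed earlier.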

6. The Table Name property is arbitrary, but by default is a concatenation of the strings passed by the Content Type and Data Element Scope properties.
7. Click the Execute button to close this window and continue the process forward...

8. Here's where Grooper does the hard work for us. Notice in the Review/Edit SQL Statement window that the SQL statement required to create our table is already written for us.
9. Click the Run SQL Statement button to create the table.

10. With the table made, the Database Tables List View will have an entry in it.

  • It will initially display with a red dot, instead of the green dot displayed in the screenshot.

11. The Table Columns List View will display the data structure of the table, while the Data Preview List View will display data within the table.

  • Because this table was just created, it will not have any data to display.

12. Clicking the Import Table Reference button will ...

  • Be warned, the Drop Table button is a powerful one in that it will delete the entire table from SQL. Use this button with EXTREME caution.

13. ...create a Database Table object in the node tree.