Grooper and AI

From Grooper Wiki

Grooper's groundbreaking AI-based integrations make getting good data from your documents a reality with less setup than ever before.

Artificial intelligence and machine learning technology has always been a core part of the Grooper platform. It all started with ESP Auto Separation's capability to separate loose pages into discrete documents using trained examples. As Grooper has progressed, we've integrated with other AI technologies to improve our product.

Grooper not only offers Azure's machine-learning-based OCR engine as an option, but also improves its results with supplementary data from traditional OCR engines. This gives you the "best of both worlds": highly accurate character recognition and position data for both machine-printed and handwritten text with minimal setup.

In version 2024, Grooper added a slew of features incorporating cutting-edge AI technologies into the platform. These AI-based features accelerate Grooper design and development. The end result is easier deployments, better results with less setup, and, for the first time in Grooper, a document search and retrieval mechanism.

In version 2025, we introduced AI Assistants, a conversational AI that is knowledgeable about your documents and more. With AI Assistants, users can ask a chatbot persona questions about resources it is connected to, including documents, databases and web services.


Grooper's AI integrations and features include:

  • Azure OCR - A machine-learning-based OCR engine offered by Microsoft Azure.
  • Large language model (LLM) based data extraction at scale
  • A robust and speedy document search and retrieval mechanism
  • Document assistants users can chat with about their documents


In this article, you will find:

Relevant AI constructs and article links

Here you will find links to all currently available articles pertaining to Grooper AI functionality.

Azure OCR

Azure OCR – An OCR Engine option for OCR Profiles that utilizes Microsoft Azure's Read API. Azure Read is an AI-based text recognition engine that uses a convolutional neural network (CNN) to recognize text. Compared to traditional OCR engines, it yields superior results, especially for handwritten text and poor-quality images. Grooper supplements Azure's results with those from a traditional OCR engine in areas where traditional OCR performs better.
Azure DI OCR – This OCR Engine utilizes Azure Document Intelligence for OCR. All models provided by this service use the Azure Read engine for their baseline OCR. Specialized models perform document analysis including text extraction, layout data collection, and more.
  • Grooper’s current integration focuses on two models:
    • prebuilt-read – Version 4.0 of the Read engine, offering superior machine-printed and handwritten text recognition across nearly any document image.
    • prebuilt-layout – Collects additional layout data Grooper can use downstream, including table lines, checkbox data, and barcode data.
  • Note: Azure DI OCR requires the "Azure Document Intelligence" Repository Option added to a Grooper Repository. See below for more information on Azure Document Intelligence.
  • The Azure DI OCR engine will eventually replace the Azure OCR engine in Grooper. Azure OCR uses an older API, with Read v3.2 being the most recent supported model. Microsoft is no longer updating this API and it will be fully retired by September 25, 2028.

LLM connectivity and constructs

LLM Connectivity

LLM Connector – A "Repository Option" configured on the Grooper Root node. This option provides connectivity to large language models (LLMs) offered by OpenAI and Microsoft Azure. There are currently two connection options:
  • OpenAI – Connects Grooper to LLMs offered through the OpenAI API or compatible APIs.
    • Compatible APIs must support chat/completions and embeddings endpoints similar to the OpenAI API to interoperate with Grooper’s LLM features.
  • Azure – Connects Grooper to LLMs offered by Microsoft Azure through its Model Catalog, including Azure OpenAI models.
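As a sketch of what "compatible" means here: an OpenAI-compatible endpoint must accept the same JSON request shape as the chat/completions endpoint. The model name, system message, and prompt below are illustrative placeholders, not values Grooper actually sends.

```python
import json

def build_chat_request(model: str, document_text: str, prompt: str) -> str:
    """Build the minimal JSON body an OpenAI-compatible
    chat/completions endpoint is expected to accept."""
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You extract data from documents."},
            {"role": "user", "content": f"{prompt}\n\n{document_text}"},
        ],
        "temperature": 0,
    }
    return json.dumps(body)

request_json = build_chat_request(
    "gpt-4.1", "Invoice #123 ...", "What is the invoice number?")
parsed = json.loads(request_json)
print(sorted(parsed.keys()))  # → ['messages', 'model', 'temperature']
```

An API that accepts this body (plus an analogous embeddings request) and returns responses in the OpenAI shape can generally stand in behind the OpenAI connection option.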

LLM-enabled extraction capabilities

Ask AI – An LLM-based Value Extractor specialized for natural-language responses. This extractor sends the document’s text and a natural-language prompt to a chat completion service. The chatbot’s response becomes the extractor’s result.
AI Schema Extractor – An LLM-based Value Extractor specialized for structured JSON responses. The AI Schema Extractor enables advanced, schema-driven extraction from unstructured or semi-structured documents, supporting scenarios such as tables, line items, and multi-field records.
AI Extract – An LLM-based Fill Method for large-scale data extraction. Fill Methods are configured on Data Models and Data Sections. They perform a secondary extraction after child Data Elements' extractors and extract methods execute, or they may act as the primary extraction mechanism when no other extractors are configured. AI Extract uses an LLM chat completion service to populate Data Models, often requiring only that Data Elements be defined.

LLM-enabled Data Section Extract Methods

AI Collection Reader – An LLM-based Section Extract Method for multi-instance section extraction. AI Collection Reader extends AI Section Reader for repeating records and is optimized for large, multi-page documents that must be processed in chunks to avoid exceeding LLM context limits.
AI Section Reader – An LLM-based Section Extract Method for single-instance section extraction. It enables advanced extraction from complex, variable, or ambiguous document layouts using generative AI.
AI Transaction Detection – An LLM-based Section Extract Method specialized for transaction-based documents such as payroll reports or EOBs. It automatically segments documents into individual transactions using detected anchors and can extract structured data from each transaction.
Clause Detection – Designed to locate specified clause types in natural-language documents. Users provide one or more sample clauses, and an embeddings model identifies the most similar document chunks. Detected sections can then use AI Extract to extract structured data from the clause text.
AI Table Reader – An LLM-based Data Table Extract Method that enables extraction of tabular data from semi-structured or unstructured documents, even when table layouts are ambiguous or inconsistent.
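The readers above often must split large documents so each request fits the model's context window. A minimal sketch of token-budget chunking, using a rough chars-per-token heuristic as an assumption (a real implementation would use the model's tokenizer):

```python
def chunk_text(text: str, max_tokens: int = 2000, chars_per_token: int = 4,
               overlap_tokens: int = 100) -> list[str]:
    """Split text into overlapping chunks that fit a token budget.

    Overlap between chunks reduces the chance a record is cut in half
    at a chunk boundary.
    """
    max_chars = max_tokens * chars_per_token
    step = (max_tokens - overlap_tokens) * chars_per_token
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + max_chars])
        if start + max_chars >= len(text):
            break
    return chunks

pages = "word " * 10_000            # ~50,000 characters of sample text
chunks = chunk_text(pages)
print(len(chunks), len(chunks[0]))  # → 7 8000
```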

LLM-enabled separation and classification capabilities

AI Separate – An LLM-based Separation Provider that evaluates page meaning and structure rather than relying on fixed rules, barcodes, or control sheets.
LLM Classifier – An LLM-based Classify Method that sends document content and candidate Document Types (with descriptions) to an LLM, which selects the best match.
Mark Attachments – Assists document separation by attaching documents to parent documents. When the Generative AI option is enabled, an LLM determines whether a document should be attached to the preceding or following document.

Other LLM-enabled capabilities

AI Generator – Used on the Search page to generate text-based outputs (TXT, CSV, HTML, etc.) such as reports, summaries, or contact lists from search results.
AI Productivity Helpers – A collection of LLM-powered tools that assist Grooper Designers with tasks such as writing regular expressions, building Data Models, and composing AI Search queries.

VLM capabilities (experimental)

We are experimenting with Vision-Language Model (VLM) integration in Grooper. The following activities are available in current builds for experimentation but are not yet considered production-ready:

VLM Analyze – Analyzes pages or folders using a VLM and saves the response as structured JSON for downstream extraction. This JSON can be accessed by LLM-based constructs using the "JSON File" Quoting Method.
VLM OCR – Uses VLMs to recognize text from images; built on the VLM Analyze activity.

LLM-related properties and concepts

Parameters – Properties that adjust how an LLM generates responses (for example, Temperature controls randomness).
Quoting Method – Determines what content is provided to an LLM, such as full document text, partial text, layout data, extracted values, or combinations thereof.
Alignment – Controls how LLM-based results are highlighted and aligned in the Document Viewer.
Prompt Engineering – The practice of designing and refining prompts to obtain desired responses from an LLM.
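As an illustration of how a parameter like Temperature works under the hood: temperature rescales the model's logits before the softmax that produces next-token probabilities. This is a simplified sketch of the math, not Grooper or OpenAI code:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Scale logits by 1/temperature, then apply softmax.

    Lower temperature sharpens the distribution (more deterministic
    output); higher temperature flattens it (more random output).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, temperature=0.2)
hot = softmax_with_temperature(logits, temperature=2.0)
print(round(cold[0], 3), round(hot[0], 3))  # cold puts far more mass on the top token
```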

AI Search and the Search page

AI Search – Enables document indexing and searching within Grooper. Once an index is created and documents are added, users can search document content and extracted data from the Search page.

Simple and advanced query mechanisms provide flexibility in search criteria. The Search page can be used for research or to execute document processing commands.

AI Assistants and the Chat page

AI Assistant – Enables the Chat page, allowing users to interact conversationally with documents and data. AI Assistants can access AI Search indexes, databases connected via Grooper Data Connections, and web services defined using RAML.

AI Assistants use retrieval-augmented generation (RAG) to build a retrieval plan that determines what content is relevant to a conversation, enabling natural-language interaction with enterprise documents and data.
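The retrieval step of RAG can be sketched as ranking content chunks by embedding similarity to the user's question. The tiny hand-made vectors below stand in for a real embeddings model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, chunk_vecs, chunks, top_k=2):
    """Return the top_k chunks most similar to the query embedding."""
    scored = sorted(zip(chunks, chunk_vecs),
                    key=lambda cv: cosine(query_vec, cv[1]),
                    reverse=True)
    return [c for c, _ in scored[:top_k]]

# Toy 3-dimensional "embeddings" for three document chunks.
chunks = ["invoice total is $500", "shipping address", "payment due in 30 days"]
chunk_vecs = [[0.9, 0.1, 0.0], [0.0, 1.0, 0.1], [0.8, 0.0, 0.5]]
query_vec = [1.0, 0.0, 0.2]  # e.g. "how much is owed?"

context = retrieve(query_vec, chunk_vecs, chunks)
print(context)  # → ['invoice total is $500', 'payment due in 30 days']
```

The retrieved chunks would then be placed into the LLM prompt as context for answering the question.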

Azure Document Intelligence integration

Azure Document Intelligence is a Microsoft cloud service that enables OCR and document analysis. Grooper’s integration allows organizations to leverage Azure’s advanced machine learning models for text extraction, layout analysis, and semantic understanding across both machine-printed and handwritten documents.

Grooper connects to Azure Document Intelligence by configuring the Azure Document Intelligence Repository Option on the database Root node using an API key and resource name.

With the Azure Document Intelligence option configured, Grooper leverages the service in two primary ways:

  • Azure DI OCR – Used by the Recognize activity for text extraction and layout data collection.
  • DI Analyze – Used for comprehensive document analysis leveraged by AI-enabled features such as AI Extract.
    • This analysis produces JSON data files that are used by the DI Layout quoting method when configuring AI-enabled features.

OpenAI and Azure OpenAI rate limits

OpenAI rate limits

OpenAI's full rate limit documentation

Rate limits restrict how many requests or tokens you can send to the OpenAI API in a specific time frame. Limits are set at the organization and project level (not per individual user) and vary by model type. Rate limits are measured in several ways:

  • RPM - Requests per minute (the number of calls to the "chat/completions" or "embeddings" endpoints per minute)
  • RPD - Requests per day (the number of calls to the "chat/completions" or "embeddings" endpoints per day)
  • TPM - Tokens per minute (effectively how much text is sent and received per minute)
  • TPD - Tokens per day (effectively how much text is sent and received per day)
  • Usage limit (OpenAI limits the total amount an organization can spend on the API per month)
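These limits interact: a workload is throttled by whichever limit it hits first. A back-of-envelope sketch (the per-document token cost is an assumption for illustration):

```python
def max_docs_per_minute(tpm: int, rpm: int, tokens_per_doc: int,
                        requests_per_doc: int = 1) -> int:
    """Estimate document throughput under both token and request limits.

    Assumes each document costs a fixed token budget (prompt +
    document text + response); illustrative, not Grooper's actual usage.
    """
    by_tokens = tpm // tokens_per_doc
    by_requests = rpm // requests_per_doc
    return min(by_tokens, by_requests)

# e.g. a 30,000 TPM limit, 500 RPM, ~3,000 tokens per document:
print(max_docs_per_minute(tpm=30_000, rpm=500, tokens_per_doc=3_000))  # → 10
```

Here the token limit binds (10 documents/minute) even though the request limit would allow 500, which is typical for lower usage tiers.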

OpenAI places users in "usage tiers" based on how much they spend monthly with OpenAI and how old their account is. As usage tiers increase, so do the rate limit thresholds for each model.

Tier   | Qualification                                           | Usage limit
Free   | User must be in an allowed geography                    | $100 / month
Tier 1 | $5 paid                                                 | $100 / month
Tier 2 | $50 paid and 7+ days since first successful payment     | $500 / month
Tier 3 | $100 paid and 7+ days since first successful payment    | $1,000 / month
Tier 4 | $250 paid and 14+ days since first successful payment   | $5,000 / month
Tier 5 | $1,000 paid and 30+ days since first successful payment | $200,000 / month

Users will automatically move up in usage tiers the more they spend with OpenAI. Users may also contact OpenAI support to request an increase in usage tier.

You can view your current usage tier and rate limits in the Limits page of the OpenAI Platform.

  • For Grooper users just getting started with the OpenAI API, these limits can cause large volumes of work to time out. Each time Grooper executes one of its LLM-based features (such as an AI Extract operation), it sends a request to OpenAI (counting toward your requests per minute/day), handing it document text and system messages (counting toward your tokens per minute/day).
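A common mitigation for rate-limited workloads is to retry with exponential backoff and jitter. A minimal sketch (the RateLimitError class and flaky endpoint are stand-ins for a real HTTP 429 response; the sleep is capped only so the demo runs quickly):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 (Too Many Requests) response."""

def call_with_backoff(request, max_retries=5, base_delay=1.0):
    """Retry a rate-limited call with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return request()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            time.sleep(min(delay, 0.01))  # capped for the demo; sleep the full delay in real use

# Simulated endpoint that is rate limited twice, then succeeds.
calls = {"n": 0}
def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError()
    return "ok"

print(call_with_backoff(flaky_request))  # → ok
```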

Azure OpenAI rate limits

Azure AI Foundry full rate limit documentation.

Azure OpenAI models are deployed with Azure AI Foundry. This allows you to deploy OpenAI models outside of OpenAI's infrastructure, as a service you manage in your own Azure environment. These deployments are still rate limited. However, users can generally achieve greater throughput with Azure OpenAI deployments than with the OpenAI API. This is especially the case for new OpenAI accounts.

Quotas and limits aren't enforced at the tenant level. Instead, the highest level of quota restrictions is scoped at the Azure subscription level. Rate limits are similar to the OpenAI API's rate limits:

  • RPM - Requests per minute (the number of calls to the "chat/completions" or "embeddings" endpoints per minute)
  • TPM - Tokens per minute (effectively how much text is sent and received per minute)


Several different factors affect your rate limits.

  • Global vs Data Zone deployments
    • Global deployments have larger rate limits and are better suited for high-volume workloads.
    • Example: Global Standard gpt-4.1 (Default) = 1M tokens per minute & 1K requests per minute
    • Example: Data Zone Standard gpt-4.1 (Default) = 300K tokens per minute & 300 requests per minute
  • Subscription tier
    • Enterprise and Microsoft Customer Agreement - Enterprise (MCA-E) vs Default
    • Enterprise and MCA-E agreements have larger rate limits than the default "pay-as-you-go" style agreement. The Default tier is best suited for testing and small teams, MCA-E for mid-to-large organizations, and Enterprise for large organizations, especially those in regulated industries.
    • Example: Global Standard gpt-4.1 (Enterprise and MCA-E) = 5M tokens per minute & 5K requests per minute
    • Example: Global Standard gpt-4.1 (Default) = 1M tokens per minute & 1K requests per minute
  • Region
    • Rate limits are defined per region (e.g. South Central US, East US, West Europe, etc.)
    • However, you are not limited to a single global quota. You can deploy models in multiple regions to effectively increase your throughput (as long as your subscription supports it).
  • Model
    • Similar to OpenAI, rate limits differ depending on the model you use.
    • Example: Global Standard gpt-4.1 (Enterprise and MCA-E) = 5M tokens per minute & 5K requests per minute
    • Example: Global Standard gpt-4o (Enterprise and MCA-E) = 30M tokens per minute & 180K requests per minute
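To take advantage of per-region quotas, requests can be rotated across several regional deployments. A minimal round-robin sketch (the endpoint URLs are hypothetical):

```python
from itertools import cycle

# Hypothetical Azure OpenAI deployment endpoints in different regions.
deployments = cycle([
    "https://eastus.example.openai.azure.com",
    "https://southcentralus.example.openai.azure.com",
    "https://westeurope.example.openai.azure.com",
])

def next_endpoint():
    """Rotate requests across regional deployments so each region's
    TPM/RPM quota is consumed, raising aggregate throughput."""
    return next(deployments)

picked = [next_endpoint() for _ in range(4)]
print(picked[0] == picked[3])  # → True (wraps around after three regions)
```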

Does OpenAI or Azure AI Foundry save my data or use it for training?

For OpenAI models
Grooper integrates with the OpenAI API, not ChatGPT. When using the OpenAI API, your data (prompts, completions, embeddings, and fine-tuning data) is not used to train or improve OpenAI models (unless you explicitly opt in to share data with OpenAI). Your data is not available to other customers or other third parties.
All data passed to and from OpenAI (prompts, completions, embeddings, and fine-tuning data) is encrypted in transit.
Data is saved in the case of fine-tuning data for your own custom models. Fine-tuned models are available to you and no one else (without your consent). All stored fine-tuning data may be deleted at your discretion. All stored data is encrypted at rest. The OpenAI API may store logs for up to 30 days for abuse monitoring. However, OpenAI offers a "zero data retention" option for trusted customers with sensitive applications. Contact the OpenAI sales team for more information on obtaining a zero data retention policy.
For Azure AI Foundry Models (including Azure OpenAI models)
Azure AI models are deployed in Azure resources under your control in your tenant, and operate as a service you manage. Your data (prompts, completions, embeddings, and fine-tuning data) is not available to other customers, OpenAI, or other third parties. Your data is not used by Microsoft, OpenAI, or any other third party to train or improve models without your permission or instruction.
All data passed to and from the model service (prompts, completions, embeddings, and fine-tuning data) is encrypted in transit.
Some data is saved in certain cases, such as data saved for fine-tuning your own custom models. All stored data is encrypted at rest. All data may be deleted at your discretion. Azure will not store prompts and completions without enabling features that do so. Azure OpenAI may store logs for up to 30 days for abuse monitoring purposes, but this can be disabled for approved applications.

More information on specific models

Model retirements

OpenAI

OpenAI has announced the retirement of several models in ChatGPT, including gpt-4o, gpt-4.1-mini, gpt-5, and others.

  • These changes do not affect Grooper’s OpenAI integration at this time.
  • Grooper integrates with the OpenAI API, not ChatGPT.
  • OpenAI API deprecations and recommended replacements are documented in OpenAI's Deprecations documentation.

Azure OpenAI

Azure maintains General Availability (GA) models for a minimum of 12 months. Refer to Azure's model retirement documentation for more information.

Known issues with certain models

Occasionally, newly released models may not behave the same as previous models when used in Grooper. When this occurs, the table below will be updated with known issues, their impact on Grooper functionality, and any recommended workarounds.

Model name | Date reported | Notes
gpt-5      | 1/30/2026     | gpt-5 does not support the Temperature parameter when configuring model parameters. Use the default Temperature setting to avoid a Bad Request error.

FYI

If you encounter the "Bad Request (HTTP BadRequest – Bad Request)" error when testing extraction, the selected model may not support one or more configured parameters in Grooper.

Refer to the documentation for your chosen model to verify supported parameters before modifying default settings.
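One defensive pattern is to omit parameters a model does not support before sending a request. A sketch (the set of models that reject Temperature is an illustrative assumption, not an authoritative list):

```python
# Models assumed, for this sketch, to reject a non-default Temperature.
NO_TEMPERATURE_MODELS = {"gpt-5"}

def build_params(model, temperature=None):
    """Build request parameters, omitting Temperature for models
    that return a Bad Request error when it is set."""
    params = {"model": model}
    if temperature is not None and model not in NO_TEMPERATURE_MODELS:
        params["temperature"] = temperature
    return params

print(build_params("gpt-4.1", temperature=0.2))  # includes temperature
print(build_params("gpt-5", temperature=0.2))    # temperature omitted
```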