AI Assistant (Object)

From Grooper Wiki

2025 BETA

This article covers new or changed functionality in the current or upcoming beta version of Grooper. Features are subject to change before version 2025's GA release. Configuration and functionality may differ from later beta builds and the final 2025 release.



AI Assistants are Grooper's conversational AI personas. They answer questions using resources they can access (including content from documents, databases, and web services). This greatly increases an AI's ability to answer domain-specific questions that require access to these resources.


AI Assistants leverage Retrieval-Augmented Generation (RAG) to provide accurate, domain-specific answers by retrieving and injecting relevant information from connected resources like documents, databases, and web services into chat sessions. They enable natural language queries for seamless interaction with data, reducing the need for extensive upfront extraction or complex syntax.

You may download the ZIPs below and upload them into your own Grooper environment (version 2025). The first contains a Project with resources used in examples throughout this article. The second contains a Batch of sample documents. Be sure to load the Project into Grooper before the Batch to ensure the documents in the Batch maintain their classification status.

Due to the proprietary nature of individual network environments, several objects contained in the Project are not fully configured. This article provides guidance on how to configure these items, but specifics such as database names and user rights permissions will depend on your environment.

About

What is an AI Assistant?

AI Assistants are Grooper's conversational AIs. They define a role to be used in Grooper Chat sessions. Each assistant has access to a collection of user-defined resources. This extends an AI chatbot's ability to answer questions on domain-specific information contained in documents, stored in databases, or retrieved from a web service.

How does a user interact with an AI Assistant?

Users access AI Assistants from the Grooper Chat page. From here, users can select an AI Assistant previously configured in Grooper Design, then start new conversations or continue conversations they have previously started. If multiple users are connected to the Grooper Repository, each user will only see their own conversations.

How do AI Assistants work?

AI Assistants use a technique called “RAG” (retrieval-augmented generation) to inject relevant information into a chat conversation. RAG extends a chatbot's “knowledge base” beyond its initial training, allowing users to ask questions specific to resources the assistant has access to. This increases a chatbot's response accuracy without the need to fine-tune the underlying AI model with further training. When a user asks a question, the AI Assistant first executes a “retrieval plan”: it analyzes the resources it is connected to, determines which one is best suited to answer the user's question, and decides which content relevant to the conversation to retrieve. This retrieved content (such as a document) is inserted into the conversation as a hidden “system message”, providing the chatbot with the context necessary to answer the user's original question.
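The retrieval loop described above can be sketched in a few lines of illustrative Python. This is not Grooper's implementation: the resource structure, the word-overlap scoring, and the sample data are hypothetical stand-ins, meant only to show how a retrieval plan selects a resource and injects its content as a hidden system message:

```python
# Illustrative sketch only -- not Grooper's implementation. The resource
# structure, the scoring logic, and the sample data are hypothetical.

def build_retrieval_plan(question, resources):
    """Pick the resource whose description shares the most words with the question."""
    def score(resource):
        overlap = set(question.lower().split()) & set(resource["description"].lower().split())
        return len(overlap)
    return max(resources, key=score)

def prepare_messages(question, resources, chat_history):
    resource = build_retrieval_plan(question, resources)
    context = resource["retrieve"](question)  # e.g. a vector search or a SQL query
    # The retrieved content is injected as a hidden "system message",
    # giving the model the context it needs to answer.
    return chat_history + [
        {"role": "system", "content": f"Relevant context:\n{context}"},
        {"role": "user", "content": question},
    ]

resources = [
    {"description": "invoice header data in SQL",
     "retrieve": lambda q: "Invoice 12345: total $5,678.90"},
    {"description": "contract documents search index",
     "retrieve": lambda q: "Contract text..."},
]
messages = prepare_messages("What is the total of invoice 12345?", resources, [])
```

In a real assistant the selection step is performed by the LLM itself, guided by each resource's Description property, which is why well-written descriptions matter later in this article.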

What resources can AI Assistants connect to?

AI Assistants can connect to the following resources:

  • Search Index References - Allows the AI Assistant to retrieve document text content stored in an Azure AI Search index. Both metadata searches and vector searches are supported.
  • Table References - Allows the AI Assistant to retrieve data from database tables using SQL queries (if the user defined in the Grooper Data Connection has write permissions, the AI Assistant may also write data to the database).
  • Web Service References - Allows the AI Assistant to retrieve data from APIs using web service calls.

The AI Assistant's retrieval plan determines which of these resources should be used to respond to the chat. This allows users to query vast amounts of document text (using vector searches in a Search Index Reference), query extracted data (stored as metadata in a Search Index Reference), and supplement information in the Grooper Repository with data from external sources (SQL tables and web services). All of this is done with a natural language prompt. No complex syntax required.

What are some benefits to AI Assistants?

AI Assistants provide users with a new way to interact with documents and other resources the AI Assistant can connect to (like databases).

  • Users can search for documents and their data using natural language.
  • Provides on-demand access to data inside documents. Users can find information without setting up a Data Model and its extraction logic.
  • Provides near instant time-to-value. Minimal processing is required in Grooper before users can start chatting with a single document or across large document sets.
  • Reduces the need to extract everything up front. Allows users to locate documents without complicated extraction.

The Scaffolding

Before we can properly configure an AI Assistant we need to ensure the required parts are in place. In this section we'll go over all the components that need to be established.

Repository Options

From the Design page of Grooper we need to verify that there is both an appropriately configured AI Search option and an LLM Connector option in the Repository Options.

For more information please see the following articles:
AI Search
LLM Connector

AI Search

  1. Select the Root node.
  2. Click the ellipsis button on the Options property.
  3. In the “Options” collection editor be sure there is an AI Search option.
  4. Configure the “URL” and “API Key” properties with appropriate information.

LLM Connector

  1. In the “Options” collection editor be sure there is an LLM Connector.
  2. The default settings of the “Service Providers” property should suffice.

Document Source

A Batch with documents needs to exist in Grooper. These documents can be imported from whatever source you have available. Most businesses will probably have historical documents they can pull from some kind of content management system. This article also provides a Batch object for ease of following along.

For more information please see the following articles:
CMIS Connection
CMIS Import

Content Management System

  1. For the creation of this article a SharePoint Document Library was used as a content management system. SharePoint is not a requirement, so you may use whatever content management system you have access to that can connect with Grooper.

CMIS Connection and Repository

  1. If using a CMIS compatible content management system, you will need to create and appropriately configure a CMIS Connection and import its CMIS Repository child object in order to import documents.

Batch in Grooper

  1. If you do not have an appropriate content management system, or simply want to easily follow along with this article, a Batch with appropriate documents was supplied with this article. (Be sure to import it AFTER importing the Project supplied with this article.)

Database Information

Information related to the documents of the associated Batch can be leveraged from a database. Most businesses will probably have historical data in an existing database that can be used. The Batch provided with this article has documents with extracted data. That data can be injected into a newly created database for ease of following along.

For more information please see the following articles:
Data Connection
Data Export

Data Connection

  1. This article provides a Project with a Data Connection.
  2. You will need to configure the connection settings to connect to a database you have access to.
  3. The database used for this article has a “HeaderValues” table with appropriate data.


  1. There is also a “LineItems” table with relevant related data.

Creating a Database

If you wish to follow along using the data provided in this article, you will need to create your own database for use with this information. If you have a database you want to work with, and simply want to add a couple of tables for this data, you can skip ahead to the next section.

  1. Once your Data Connection has saved connection credentials, right-click the Data Connection.
  2. In the pop-out menu choose “Server Actions > Create Database”
  3. In the “Create Database” window name the database and click “Execute”. Be sure to select this database on the “Database Name” property of the Data Connection.

Creating Database Tables

Once your Data Connection is connected to a database you can create tables for it from data supplied with this article.

  1. Once the Data Connection has a saved database connection, right-click the Data Connection.
  2. From the pop-out menu choose “Server Actions > Create Table”.
  3. In the “Create Table” window click the drop-down for the Data Element property and select the Data Model from the Project supplied with this article.
  4. Click the ellipsis button for the Create Statement property, and in the “Create Statement” window change the name of the table in the SQL query to “HeaderValues”.
  5. When configuration is complete, click the “Execute” button in the "Create Table" window.


  1. Once the “HeaderValues” table is created, right-click the Data Connection.
  2. From the pop-out menu choose “Server Actions > Create Table”.
  3. In the “Create Table” window click the drop-down for the Data Element property and select the “Line Items” table from the Project supplied with this article.
  4. Click the ellipsis button for the “Create Statement” property, and in the “Create Statement” window add an “InvoiceNo” column.
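As a rough illustration of what the generated Create Statements amount to, the Python sketch below builds miniature versions of the two tables in an in-memory SQLite database. The column names and types are assumptions for illustration only; the actual statements are generated from your Data Model, and your database server's dialect will differ from SQLite's:

```python
import sqlite3

# Hypothetical approximation of the CREATE statements the "Create Table"
# action generates -- real column names and types come from your Data Model.
create_header = """
CREATE TABLE HeaderValues (
    InvoiceNo   TEXT PRIMARY KEY,
    InvoiceDate TEXT,
    Total       REAL
)"""
create_lines = """
CREATE TABLE LineItems (
    InvoiceNo   TEXT,            -- the added column, so line items can be joined to headers
    Description TEXT,
    Quantity    REAL,
    Rate        REAL,
    Amount      REAL
)"""

conn = sqlite3.connect(":memory:")
conn.execute(create_header)
conn.execute(create_lines)
```

The key point is the added “InvoiceNo” column on the line-items table: without it there is no way to relate a line item back to its invoice header.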

Exporting Data to the Database

Once the tables for the database are created you can export data from the Batch supplied with this article.

Extracted Data

  1. Select the Data Model within the Project supplied with this article.
  2. Click on the “Tester” tab.
  3. From the Batch Viewer, click on the “Batch Select” button and select the Batch supplied with this article. Then, select a document in the Batch.
  4. In the Data Model Preview you will see extracted information for the document, confirming that extraction has been performed on this Batch.


Export Behavior

  1. Select the Content Model from the Project supplied with this article.
  2. Click the ellipsis button on the Behaviors property.
  3. In the “Behaviors” collection editor you will need to add an “Export” behavior.
  4. In the “Export Definitions” collection editor you will need to add a “Data Export” definition. It should connect to the appropriate Data Connection and have two table mappings for the “HeaderValues” and “LineItems” tables.
  5. Configure the “HeaderValues” table mapping to point at the appropriate Data Connection and Target Table and set the appropriate column mappings.
  6. Configure the “LineItems” table mapping to point at the appropriate Data Connection and Target Table and set the appropriate column mappings.


Export Activity

  1. Select the Batch Process Step configured for Export from the Project that is supplied with this article.
  2. Click on the "Activity Tester" tab.
  3. In the Batch Viewer click the “Select Batch” button and select the Batch supplied with this article. Select the top-most Batch Folder.
  4. Click the “Submit Job” button to process export for all documents in this Batch.


Verifying Exported Data

  1. To verify exported data, select the Data Connection that was configured previously.
  2. Click the "List Tables" button.
  3. Select one of the listed tables and you should see the exported data in the Data Preview.

Search Index

An AI Search index is required for the AI Assistant to retrieve document full text and metadata, and to display a document associated with a query when users interact with the AI Assistant on the Chat page. If you already have an AI Search index with an appropriate Indexing Behavior and have supplied it data, you can skip to the next section to see how an AI Assistant leverages this resource. Otherwise, follow the steps below.

For more information please see the following article:
AI Search

Indexing Behavior

  1. Select the Content Model from the Project supplied with this article.
  2. Click the ellipsis button for the Behaviors property.
  3. In the “Behaviors” collection editor you will need to add an Indexing Behavior to the collection of Behaviors.
  4. On the Name property you will need to supply a name of an existing AI Search index, or create a name for a new AI search index.
  5. On the “Included Elements” property you should select all relevant data elements, which for this setup will be all elements.
  6. To allow for the AI Search index to index the full text of the documents, you will need to enable “Vector Search” and select an appropriate model for the “Embeddings Model” property. “text-embedding-ada-002” will work for this setup.
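To illustrate what enabling Vector Search buys you, the toy Python sketch below ranks documents by cosine similarity between embedding vectors. The three-dimensional vectors and document names are made up for illustration; a real embeddings model such as text-embedding-ada-002 produces 1536-dimensional vectors:

```python
import math

# Toy illustration of vector search: each text is represented as an
# embedding vector, and relevance is measured with cosine similarity.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical 3-D embeddings (real models use hundreds or thousands of dimensions).
query_vec = [0.9, 0.1, 0.0]
documents = {
    "invoice_12345.pdf": [0.8, 0.2, 0.1],
    "contract_acme.pdf": [0.1, 0.1, 0.9],
}
best = max(documents, key=lambda name: cosine_similarity(query_vec, documents[name]))
```

Because similarity is computed on meaning-bearing vectors rather than literal keywords, a query like “how much did we bill ACME?” can still surface an invoice that never contains the word “bill”.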

Create Search Index

  1. If you need to create a new AI Search index, right-click on the Content Model.
  2. From the pop-out menu select “Search > Create Search Index”. This will create a new AI Search index with the name given on the Name property of the Indexing Behavior.

Add Data to Search Index

  1. If you need to add documents and their data to the AI Search index, select the appropriate Batch.
  2. Click on the “Viewer” tab.
  3. Select all the documents in the Batch and right-click one of the selected documents.
  4. From the pop-out menu choose “Search > Add to Index”.

Creating and Configuring the AI Assistant

Creating an AI Assistant

Creating an AI Assistant is very straightforward. However, if you want to follow along with this article, you can use the AI Assistant in the Project supplied with this article.

  1. Right-click on a Project.
  2. From the pop-out menu select “Add > AI Assistant”.
  3. In the “Add” window provide a name for the AI Assistant.
  4. Click the "Execute" button.

Configuring an AI Assistant

Property Overview

Model
Specifies the AI model used for processing chat interactions. The selected model determines the assistant's language capabilities, response quality, and processing efficiency. Different models may offer varying levels of accuracy, response speed, and context retention. Choosing a model should be based on the complexity of user queries, the volume of data being processed, and the required level of detail in responses. More advanced models may provide better reasoning and retrieval-augmented generation (RAG) performance but could have higher processing costs. Ensure the selected model aligns with the assistant's intended use case and available resources.

Click the ellipsis button for this property to display a window with a list of options. "gpt-4o-mini" is the model used for the configuration of this article.


Parameters
This property allows you to tune how the LLM completes operations. Setting this to "Override" will expose a series of sub-properties for adjusting Temperature, TopP, Presence Penalty, and Frequency Penalty.

Please see the Parameters article for more information.
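For reference, the sketch below shows the four tunable parameters as they commonly appear in OpenAI-style chat APIs, with typical ranges noted in comments. The exact property names, ranges, and defaults in Grooper may differ; treat the values here as illustrative starting points, not recommendations:

```python
# Common OpenAI-style sampling parameters; Grooper's property names may differ.
override_parameters = {
    "temperature": 0.2,        # 0-2: lower values give more deterministic answers
    "top_p": 1.0,              # nucleus sampling: fraction of probability mass considered
    "presence_penalty": 0.0,   # -2..2: > 0 discourages revisiting topics already mentioned
    "frequency_penalty": 0.0,  # -2..2: > 0 discourages repeating the same tokens
}
```

A low temperature is a common choice for retrieval-backed assistants, since answers should stick closely to the retrieved facts rather than improvise.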


Instructions
Defines the core behavioral and response guidelines for the AI Assistant. These instructions shape how the assistant interprets user queries, structures its responses, and maintains consistency in tone and accuracy. The instructions should specify the AI's role, scope of knowledge, preferred response format, and any constraints on the type of information it provides. Clear and well-crafted instructions ensure that the assistant remains focused on its intended use case, aligns with business objectives, and delivers helpful, relevant, and reliable answers.

To ensure the AI Assistant provides accurate, relevant, and structured responses, follow this framework when writing the Instructions property:

  • Define the AI's Role and Purpose
    • Explain what the AI Assistant is designed to do, including its scope of expertise and primary function.
    • Example:
      • "You are an AI Assistant specializing in processing and analyzing invoice-related data, providing users with accurate, structured, and contextual responses regarding invoices, line items, and associated records."
  • Set Expectations for Response Formatting
    • Describe how the assistant should format its responses for clarity and usability.
    • Example:
      • "Provide structured responses with clearly labeled details. When listing multiple results, present them in a tabular or bullet-point format. If numerical calculations are required, show the formula used. If a query is ambiguous, ask clarifying questions before responding."
  • Define the Tone and Style
    • Indicate whether the AI should be formal, professional, conversational, or technical in its responses.
    • Example:
      • "Maintain a professional yet approachable tone. Keep responses concise and informative, avoiding overly technical jargon unless necessary. If needed, explain complex concepts in simple terms."
  • Outline Handling of Incomplete or Unavailable Data
    • Guide the AI on how to handle missing or unclear information.
    • Example:
      • "If data is incomplete or unavailable, inform the user rather than making assumptions. Offer suggestions on how they might refine their query or check related sources."
  • Set Constraints and Avoidances
    • Define any restrictions, such as prohibiting speculation, refusing certain types of requests, or avoiding unnecessary details.
    • Example:
      • "Do not provide financial advice, speculate on missing data, or generate responses beyond the scope of invoice records. Stick to factual information retrieved from available resources."
  • Provide Examples of Expected Behavior (Optional but useful)
    • Including a few sample interactions can help reinforce the intended AI behavior.
    • Example:
      • User: "How much is invoice 12345?"
      • AI Assistant: "Invoice 12345 has a total amount of $5,678.90. It was issued to ABC Corporation on January 5, 2025. Would you like to see the itemized breakdown?"

Provided below are the instructions used for the AI Assistant example in the Project provided with this article.

Your primary role is to assist users in answering questions related to customer invoices.
Provide accurate, concise, and clear explanations tailored to the user's query.
Use professional and approachable language while ensuring responses are relevant to the specific invoice data available.
When necessary, clarify incomplete questions by requesting additional information.
Do not provide information beyond the scope of customer invoices or speculate on unknown data.
If appropriate, guide users to related resources or steps they can take to resolve their queries.


Resources
Defines the set of external data sources the AI Assistant can access during chat sessions. These resources enable the assistant to retrieve relevant information beyond its built-in knowledge, improving response accuracy and contextual relevance. The available resource types include:

  • Search Index References: Provides access to indexed document repositories for retrieving unstructured text and metadata.
  • Table References: Enables structured data retrieval from database tables, such as invoice records or transaction details.
  • Web Service References: Connects the assistant to external APIs for real-time data, such as payment statuses or customer records.

The AI Assistant dynamically selects and queries these resources based on the retrieval plan, ensuring users receive the most relevant and up-to-date information. Proper configuration of these resources enhances the assistant's ability to perform multi-hop retrieval, cross-referencing structured and unstructured data seamlessly.

Search Index Reference Properties

  • Index Name
    • Specifies the name of the search index that the AI Assistant can query for retrieving relevant document data. The search index acts as a structured repository of unstructured text and metadata, enabling efficient document lookup and retrieval. This property must match the exact name of an existing index configured in the connected search service (e.g., ai_assistant_index). The assistant uses this index to find documents based on metadata searches, keyword queries, or vector-based similarity searches, depending on the retrieval plan. Ensure that the index contains well-structured and properly tagged data to optimize retrieval performance and accuracy.
  • Enable Vector Search
    • Determines whether the AI Assistant can perform vector-based search queries against this search index. Vector search allows for semantic similarity retrieval, enabling the assistant to find relevant documents even if the exact keywords are not used. This is particularly useful for natural language queries, as it enhances the AI's ability to understand intent and context beyond simple keyword matching.
    • For this feature to work, 'Vector Search' must be enabled in the Indexing Behavior settings of the associated search index. The index must also support vector embeddings, typically generated using AI models that encode text into numerical representations.
    • Enabling vector search can improve result accuracy for complex queries but may require additional computational resources. If disabled, only traditional keyword and metadata searches will be available.
  • Enable Standard Search
    • Determines whether the AI Assistant can perform standard keyword-based search queries against this search index. A standard search operates similarly to manual searches performed through the Search Page, relying on keyword matching, Boolean operators, and metadata filters to retrieve relevant documents.
    • Enabling this feature allows the assistant to find documents based on exact or partial keyword matches within indexed fields. This method is efficient for structured queries but may not capture nuanced intent as effectively as vector search.
    • For optimal performance, ensure the search index is well-configured with relevant metadata fields, optimized tokenization, and filtering options to improve the accuracy and relevance of standard search results. If both Standard Search and Vector Search are enabled, the AI Assistant may use hybrid retrieval strategies to provide the best possible response.
  • Filter
  • Description
    • Provides contextual information about the search index to help the AI Assistant understand its purpose, the type of data it contains, and how it relates to other resources. This description should clearly define the scope of the index, including whether it stores invoice documents, contracts, reports, or other business records.
    • Additionally, specify how the AI should use this resource in relation to Table References (SQL databases) and Web Service References (APIs). For example, if the search index contains invoices, mention how the 'InvoiceNo' field in the index relates to the '[InvoiceNo]' column in SQL tables to facilitate multi-hop retrieval.
    • A well-structured description improves retrieval accuracy, ensuring the AI Assistant selects the most relevant resource when answering user queries.

Provided below is a screenshot of the configuration of the Search Index Reference used in this article, as well as the contents of the Description property.

This resource contains invoice documents stored in a SharePoint-integrated Azure AI Search Index, referred to as 'SharePoint' by users.
The index exclusively contains test documents dated for the year 2025, specifically created for testing scenarios with future-dated content.
Use a metadata search when locating invoice numbers, ensuring the 'InvoiceNo' field in the index matches the [InvoiceNo] column in SQL queries.
This index is best utilized for retrieving document-level details or metadata related to invoices, such as invoice dates, amounts, vendor, and associated line items.
Avoid full-text searches unless required, and prioritize metadata fields for precise results.
When cross-referencing with other resources like SQL tables, always validate relationships using the invoice number to maintain consistency.
Clearly document assumptions or limitations when interpreting results, and note that all data retrieved from this index is for test purposes only and does not reflect real-world transactions.
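To illustrate the metadata-search guidance above, the hypothetical Python helper below builds the kind of OData filter expression Azure AI Search accepts for an exact match on the 'InvoiceNo' field. The field name comes from this article's example index; escaping follows OData's rule of doubling single quotes inside string literals:

```python
# Hypothetical helper -- 'InvoiceNo' is the field name used by this
# article's example index; adapt it to your own index schema.
def invoice_filter(invoice_no):
    # OData escapes a single quote inside a string literal by doubling it.
    escaped = invoice_no.replace("'", "''")
    return f"InvoiceNo eq '{escaped}'"

# A search client would pass this as the filter of a metadata search, e.g.:
# results = search_client.search(search_text="*", filter=invoice_filter("12345"))
```

Because the filter matches on an indexed metadata field rather than full text, this is the precise, low-cost lookup the description above recommends for invoice numbers.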


Table Reference Properties

  • Connection
    • Specifies the Data Connection that links the AI Assistant to the database containing the referenced table. This connection defines the database server, authentication method, and access permissions required to query structured data.
    • The selected Data Connection must be properly configured to ensure secure and efficient data retrieval. If the database contains multiple tables, ensure that the appropriate table is exposed through the Table Reference.
    • The AI Assistant will use this connection to execute structured queries, such as retrieving invoice records, customer details, or transaction histories. Proper indexing and query optimization within the database can improve retrieval speed and accuracy.
  • Table Name
    • Specifies the name of the database table that the AI Assistant can query for structured data retrieval. This table must exist within the database defined by the selected Data Connection.
    • The tables displayed in the pick-list are those available through the selected Data Connection. The AI Assistant will use this table to fetch relevant data, such as invoice details, customer records, or transaction line items.
    • When integrating multiple Table References, clearly define how this table relates to others (e.g., using foreign keys like [InvoiceNo] to join with a line items table). Optimized indexing and structured queries can improve retrieval efficiency.
  • Description
    • Provides a clear description of the database table to help the AI Assistant understand its purpose, the type of data it contains, and how it relates to other resources.
    • Use this description to specify:
      • The table's primary function (e.g., 'This table stores invoice header details, including invoice numbers, dates, and total amounts.')
      • Key fields and relationships with other tables (e.g., 'The [InvoiceNo] column links to [LineItems].[InvoiceNo] for retrieving associated line items.')
      • How the AI should use this table (e.g., 'Use this table for structured invoice data before searching the document repository for supporting files.')
    • A well-defined description improves retrieval accuracy, ensuring the AI selects the correct table when answering user queries and optimizing multi-hop retrieval strategies.

Provided below are screenshots of the configuration of both the "HeaderValues" and "LineItems" Table References, as well as the contents of their respective Description properties.

This table contains invoice-related data extracted from the SQL database, with each row representing a single invoice.
Commonly referred to as 'SQL' or the 'invoices database' by users, it serves as the primary source for high-level invoice details, including invoice numbers, dates, and total amounts.
The [InvoiceNo] column acts as the primary key and can be used to join with other tables, such as [LineItems], to retrieve detailed line item data.
Use this table as the starting point for queries involving invoice summaries or header-level data.
When filtering or querying, prioritize fields directly relevant to the user's request to optimize retrieval efficiency.
Clearly note that this table provides structured data that complements unstructured document data in other resources like the Search Index.
Always validate cross-references between resources using the [InvoiceNo] column to ensure consistency and accuracy in responses.


This table stores detailed line-item data from the SQL database for invoices recorded in the [HeaderValues] table.
Each row represents a single line item associated with an invoice, providing granular details about the goods or services sold.
Users may refer to this table as 'SQL' or the 'invoices database.'
Key columns include:
[Description]: Describes the item or service sold.
[Quantity]: Indicates the number of units sold for the item.
[Rate]: Specifies the unit price charged for the item.
[Amount]: Represents the total cost for the line item, calculated as [Quantity] × [Rate].
The [InvoiceNo] column serves as a foreign key, linking each line item to its corresponding invoice in the [HeaderValues] table.
This table is ideal for retrieving item-level details, performing calculations such as totals or averages, or generating comprehensive invoice breakdowns.
For optimal performance, ensure queries filter by [InvoiceNo] or other relevant criteria to reduce unnecessary data retrieval.
Use this table in conjunction with the [HeaderValues] table when both high-level invoice data and detailed item information are required.
Always validate cross-references to maintain consistency and accuracy in responses.
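The join between the two tables can be shown with a miniature, self-contained example. The SQL below runs against an in-memory SQLite stand-in with made-up rows; a real AI Assistant would issue a similar query through the configured Data Connection:

```python
import sqlite3

# Miniature stand-in for the two tables; columns, types, and rows are
# assumptions for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE HeaderValues (InvoiceNo TEXT PRIMARY KEY, Total REAL);
CREATE TABLE LineItems (InvoiceNo TEXT, Description TEXT, Quantity REAL, Rate REAL, Amount REAL);
INSERT INTO HeaderValues VALUES ('12345', 150.0);
INSERT INTO LineItems VALUES ('12345', 'Widget', 10, 10.0, 100.0);
INSERT INTO LineItems VALUES ('12345', 'Gadget',  5, 10.0,  50.0);
""")

# The kind of query a retrieval plan might generate: join line items to
# their header via the [InvoiceNo] foreign key, computing each line's
# amount as Quantity x Rate.
rows = conn.execute("""
    SELECT h.InvoiceNo, h.Total, l.Description, l.Quantity * l.Rate AS Amount
    FROM HeaderValues h
    JOIN LineItems l ON l.InvoiceNo = h.InvoiceNo
    WHERE h.InvoiceNo = '12345'
""").fetchall()
```

Filtering on [InvoiceNo] in the WHERE clause, as the description advises, keeps the assistant from pulling back every line item in the database when the user asked about a single invoice.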


Web Service Reference Properties

  • Definition File
    • Specifies the RAML (RESTful API Modeling Language) definition file that describes the structure, endpoints, parameters, and authentication requirements of the web service. This file acts as a blueprint for how the AI Assistant interacts with the web service, ensuring it can correctly format requests and interpret responses.
    • The RAML definition must accurately reflect the API's capabilities, including:
      • Available endpoints (e.g., /invoices/{id}, /payments/status)
      • Required parameters (e.g., InvoiceNo, CustomerID)
      • Authentication methods (e.g., API keys, OAuth)
      • Expected response structures (e.g., JSON, XML formats)
    • Properly defining this file enables the AI Assistant to retrieve external data seamlessly, such as payment statuses, customer details, or other dynamic information that complements structured database and document resources.
  • Title
    • Specifies the name used to identify this web service definition within the AI Assistant. This title should be clear, concise, and descriptive to ensure users and the AI can easily recognize its purpose.
    • Best practices for naming:
      • Use a name that reflects the service's function (e.g., 'Invoice Payment API' or 'Customer Account Lookup').
      • Keep it consistent with other resource names for clarity when combining multiple data sources.
      • Avoid overly generic names—ensure it clearly distinguishes this web service from others in the system.
    • This title is referenced internally when constructing retrieval plans, allowing the AI Assistant to determine when and how to call this web service for relevant data.
  • Headers
    • Defines the HTTP headers that should be included in each request sent to the web service. These headers provide essential metadata, authentication details, and request-specific parameters that influence how the API processes requests.
    • Each header should be written on a separate line using the format:
      • Key: Value
    • Common Headers:
      • Authentication: Authorization: Bearer <token> (for OAuth-based APIs)
      • Content Type: Content-Type: application/json (specifies request body format)
      • Custom Headers: X-API-Key: <your_api_key> (for API key authentication)
    • Best Practices:
      • Ensure required authentication headers are set to avoid access errors.
      • Use appropriate content types (application/json, application/xml, etc.) based on the API's expected format.
      • Verify case sensitivity—some APIs may require exact casing for header keys.
    • Properly defining headers ensures seamless communication with the web service, improving the accuracy and security of API calls.
  • Authentication
    • Specifies the authentication method required to access the web service, ensuring secure communication between the AI Assistant and the API. The selected method determines how credentials are provided with each request.
    • Available Authentication Methods:
      • Basic Authentication
        • Uses a username and password encoded in Base64.
        • Included in the request header as:
          • Authorization: Basic <Base64EncodedCredentials>
        • Suitable for simple authentication scenarios but less secure than OAuth.
      • OAuth Client Credentials
        • Uses a client ID and client secret to obtain an access token.
        • Follows the OAuth 2.0 client credentials flow, where the AI Assistant authenticates as itself rather than on behalf of a user.
        • Typically requires a request to a token endpoint before calling the main API.
    • Best Practices:
      • Use OAuth for APIs that require token-based authentication and secure access control.
      • Ensure credentials (username/password or client ID/secret) are securely stored and not hardcoded.
      • Verify that the authentication method matches the API's security requirements.
    • Proper authentication configuration is essential for successful API calls and data retrieval.
  • Timeout
    • Specifies the maximum amount of time (in seconds) the AI Assistant will wait for a response from the web service before triggering a timeout error.
    • How It Works:
      • If the web service does not respond within the specified time, the request is automatically terminated to prevent excessive waiting.
      • A timeout error may indicate network latency, server issues, or an unresponsive endpoint.
    • Best Practices for Setting Timeout Values:
      • Shorter timeouts (2-5 seconds): Use for high-speed APIs where immediate response is expected.
      • Moderate timeouts (10-30 seconds): Suitable for APIs that perform complex queries or interact with databases.
      • Longer timeouts (30+ seconds): Use only for APIs known to have delayed processing, such as those generating reports.
    • If timeouts occur frequently, consider optimizing API performance, checking network connectivity, or increasing the timeout value as needed.
  • HTML Body Selector
    • Specifies an optional CSS selector used to identify the root element of an imported HTML document when processing responses from the web service. This allows the AI Assistant to extract relevant content from structured HTML responses while ignoring unnecessary elements such as headers, footers, or navigation menus.
    • How It Works:
      • If set, only the content within the selected HTML element will be processed.
      • Uses CSS selector syntax to target specific elements (e.g., #content, .main-body, article).
      • Helps filter out irrelevant data from API responses that return full-page HTML documents.
    • Examples of Selectors:
      • Extracting the main article content:
        • article (selects the <article> tag content)
      • Selecting a specific div by ID:
        • #invoice-details (selects the content of the element with id="invoice-details" only)
      • Targeting a class-based container:
        • .main-body (selects the content of elements with class="main-body" only)
    • Best Practices:
      • Use precise selectors to avoid unintended content extraction.
      • If unsure, inspect the web service's HTML response structure to determine the most relevant element.
      • Leave blank if full HTML content should be processed.
    • Properly setting this property ensures that the AI Assistant retrieves only the most relevant sections of HTML responses, improving data accuracy and reducing processing overhead.
  • Description
    • Provides a clear and structured explanation of the web service to help the AI Assistant understand its purpose, the type of data it retrieves, and how it relates to other resources in the system.
    • Key Elements to Include:
      • Functionality: Describe what the web service provides (e.g., 'Retrieves real-time invoice payment statuses from an external accounting system.').
      • Key Data Fields: Mention important response data (e.g., 'Returns invoice numbers, payment dates, and outstanding balances.').
      • Relationships: Explain how this web service connects with Table References and Search Index References (e.g., 'The [InvoiceNo] from this service aligns with [HeaderValues].[InvoiceNo] in the SQL database.').
      • Best Use Cases: Indicate when and why the AI should query this service (e.g., 'Use this service when a user asks about the latest invoice payment status or outstanding balances.').
    • A well-defined description improves retrieval accuracy by helping the AI Assistant determine when and how to query the web service alongside other resources.
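Several of the properties above (Headers, Authentication, Timeout) map directly onto how an HTTP request is constructed. The sketch below is illustrative only: the URL, API key, and credentials are placeholders, and it uses Python's standard library rather than Grooper's internal request pipeline.

```python
import base64
import urllib.request

def parse_headers(header_text: str) -> dict:
    """Parse one 'Key: Value' header per line, as entered in the Headers property."""
    headers = {}
    for line in header_text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            headers[key.strip()] = value.strip()
    return headers

def basic_auth_header(username: str, password: str) -> str:
    """Build the Authorization header value used by Basic Authentication."""
    creds = base64.b64encode(f"{username}:{password}".encode()).decode("ascii")
    return f"Basic {creds}"

# Hypothetical header text and credentials, for illustration only.
headers = parse_headers("Content-Type: application/json\nX-API-Key: demo-key")
headers["Authorization"] = basic_auth_header("user", "pass")

# A request carrying those headers; the Timeout property would become
# the timeout argument when the call is actually made.
req = urllib.request.Request("https://api.example.com/invoices/12345", headers=headers)
# response = urllib.request.urlopen(req, timeout=10)   # not executed here
print(headers["Authorization"])
```

Note that Basic Authentication only encodes the credentials (Base64 is not encryption), which is why the article recommends OAuth for stronger access control.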

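The HTML Body Selector behaves like a CSS selector applied before response content is processed. As a rough illustration of selecting by ID (a simplified stand-in for such a parser, not Grooper's implementation; it does not handle void elements like <br>):

```python
from html.parser import HTMLParser

class IdExtractor(HTMLParser):
    """Collect only the text inside the element whose id matches target_id,
    approximating a '#invoice-details' body selector."""
    def __init__(self, target_id):
        super().__init__()
        self.target_id = target_id
        self.depth = 0       # nesting depth while inside the target element
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1
        elif dict(attrs).get("id") == self.target_id:
            self.depth = 1
    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1
    def handle_data(self, data):
        if self.depth:
            self.chunks.append(data)

# Invented sample page: navigation noise plus the section we actually want.
page = ("<html><body><nav>Menu</nav>"
        "<div id='invoice-details'>Invoice 12345 <b>Total: $50</b></div>"
        "</body></html>")
p = IdExtractor("invoice-details")
p.feed(page)
extracted = "".join(p.chunks)
print(extracted)
```

With the selector applied, the navigation text ("Menu") is discarded and only the invoice content is passed on for processing.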

Additional Tools
Defines a set of supplementary retrieval tools that the AI Assistant can utilize during chat sessions to enhance its ability to retrieve, clarify, and present information. These options allow the assistant to dynamically expand its knowledge and refine responses based on available resources.

Available Options:

  • None - No additional retrieval tools are enabled. The assistant relies solely on predefined resources (Search Indexes, Table References, Web Service References).
  • Ask User - Enables the assistant to request clarification when a user's inquiry is ambiguous, helping refine queries for more accurate responses.
  • Help Search - Grants the assistant access to Grooper help topics and wiki articles, improving its ability to answer questions related to Grooper functionality.
  • Load Schema - Allows the assistant to retrieve structural metadata (schemas) from search indexes, web services, and database tables within its scope. Useful for answering questions about data models and relationships.
  • Load Web Page - Enables the assistant to retrieve and process content from web pages, allowing it to pull in real-time or external information from online sources.
  • All - Activates all additional retrieval tools, maximizing the assistant's ability to access, interpret, and retrieve relevant data.

Best Practices:

  • Enable "Ask User" if users frequently submit vague or incomplete queries.
  • Use "Help Search" when the AI needs to provide documentation-based assistance for Grooper functionality.
  • "Load Schema" is useful when users ask for database structures, table relationships, or indexing details.
  • Be mindful when enabling "Load Web Page", as it may introduce external dependencies and longer response times.
  • If full retrieval capabilities are required, select "All" to maximize available resources.

Selecting the appropriate combination ensures the AI Assistant delivers more contextually aware, precise, and resource-driven responses.


Retrieval Instructions
Defines the guidelines for how the retrieval plan generator selects, prioritizes, and queries connected resources to gather relevant information. These instructions help the AI Assistant determine which data sources (e.g., Search Index, SQL tables, or web services) should be used, in what order, and how relationships between resources should be handled. Well-crafted retrieval instructions ensure accurate, efficient, and contextually appropriate responses by specifying constraints, preferred query structures, and multi-hop retrieval logic. Optimized retrieval logic improves performance, reduces irrelevant results, and enhances response accuracy.

To ensure the AI Assistant efficiently retrieves and integrates relevant data, follow this framework when defining Retrieval Instructions. These guidelines help the retrieval plan generator determine which resources to query, in what order, and how to handle relationships between different data sources.

  • Define the Primary Data Sources & Their Purpose
    • Clearly specify which connected resources should be used and what type of data they contain.
    • Example:
      • "Use the SQL database ([HeaderValues] and [LineItems]) for structured invoice data, including invoice metadata, line items, and total amounts. Use the Search Index for retrieving invoice-related documents stored in SharePoint. Use the Web Service Reference for external API lookups related to invoice payments."
  • Specify Query Prioritization & Resource Selection
    • Indicate the order in which resources should be queried and the conditions for selecting the best resource.
    • Example:
      • "When retrieving invoice details, first check [HeaderValues] for header-level data. If line item details are needed, query [LineItems] using the [InvoiceNo] key. If document context is required, search the Search Index using the 'InvoiceNo' field. External web service queries should only be used if invoice payment status is requested."
  • Define Multi-Hop Retrieval Logic (Chained Queries)
    • Explain how related queries should be executed across multiple resources.
    • Example:
      • For Invoice Related Queries:
        • Retrieve invoice metadata from [HeaderValues] (invoice number, date, total amount).
        • Use the invoice number to retrieve associated line items from [LineItems].
        • If a related document exists in the Search Index, retrieve it by matching the 'InvoiceNo' field.
        • If the user asks about payment status, query the Web Service Reference after retrieving structured invoice data.
  • Specify Query Syntax & Formatting Best Practices
    • Define how queries should be structured for each resource.
    • Example:
      • "When querying SQL tables, always enclose table and column names in square brackets (e.g., SELECT [InvoiceNo], [Amount] FROM [HeaderValues]). When performing metadata searches in the Search Index, use exact matches for invoice numbers but allow partial matches for customer names. Limit queries to only necessary columns to improve performance."
  • Handle Data Joins & Relationships
    • Describe how different data sources relate to each other and how queries should be structured for joins.
    • Example:
      • "Join [HeaderValues] and [LineItems] using [InvoiceNo] = [HeaderValues].[InvoiceNo]. Ensure that when retrieving line item details, results are grouped by invoice number. When cross-referencing with the Search Index, relate the 'InvoiceNo' metadata field to the [InvoiceNo] SQL column."
  • Address Filtering & Optimization
    • Provide guidance on reducing irrelevant results and optimizing retrieval efficiency.
    • Example:
      • "Filter invoices by date range when retrieving from [HeaderValues] to avoid excessive data. Use TOP 10 when returning multiple invoices unless the user explicitly requests all results. Avoid unnecessary full-text searches in the Search Index unless metadata searches yield no results."
  • Define Error Handling & Fallback Logic
    • Specify what the AI should do when data is missing or retrieval fails.
    • Example:
      • "If an invoice number is not found in [HeaderValues], check the Search Index for a related document before returning 'No results found.' If a query to the Web Service Reference fails, inform the user and suggest checking back later."
  • Provide Examples of Expected Retrieval Behavior (Optional but Useful)
    • Demonstrate how different queries should be executed across resources.
    • Example:
      • User Query: "Show me the details for invoice 12345, including items and document reference."
      • Retrieval Plan:
        • Retrieve invoice metadata from [HeaderValues].
        • Retrieve line items from [LineItems] using [InvoiceNo].
        • Search the Search Index for a document matching 'InvoiceNo' = '12345'.
        • Return the combined response with structured invoice data, line item details, and a document reference if available.
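The retrieval plan above can be exercised against a toy database. This sketch uses an in-memory SQLite database with the article's [HeaderValues] and [LineItems] names (the schema and rows are invented for illustration) and shows the bracketed-identifier join the retrieval instructions call for:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE [HeaderValues] ([InvoiceNo] TEXT, [InvoiceDate] TEXT, [Total] REAL);
    CREATE TABLE [LineItems]    ([InvoiceNo] TEXT, [Item] TEXT, [Amount] REAL);
    INSERT INTO [HeaderValues] VALUES ('12345', '2025-01-15', 50.0);
    INSERT INTO [LineItems]    VALUES ('12345', 'Widget', 30.0), ('12345', 'Gadget', 20.0);
""")

# Table and column names enclosed in square brackets; only needed columns
# are selected; the join follows [LineItems].[InvoiceNo] = [HeaderValues].[InvoiceNo].
rows = conn.execute("""
    SELECT h.[InvoiceNo], h.[Total], l.[Item], l.[Amount]
    FROM [HeaderValues] h
    JOIN [LineItems] l ON l.[InvoiceNo] = h.[InvoiceNo]
    WHERE h.[InvoiceNo] = ?
""", ("12345",)).fetchall()

for row in rows:
    print(row)
```

The AI Assistant generates queries of this shape itself; the point of the retrieval instructions is to steer it toward this structure (bracketed names, explicit joins, minimal columns) rather than leaving the syntax to chance.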

Provided below are the instructions used for the AI Assistant example in the Project provided with this article.

Ensure all table names and column names are enclosed in square brackets, including columns used in functions (e.g., SUM([Column Name])).
Always use clear and logical joins when combining tables, such as joining [HeaderValues] with [LineItems] based on [LineItems].[InvoiceNo] = [HeaderValues].[InvoiceNo].
Prioritize Table References for accurate and efficient data collation.
Optimize queries by retrieving only the necessary columns to minimize redundancy and improve performance.
When filtering data, use precise conditions to ensure relevance.
If the retrieval involves multiple tables, follow a hierarchical approach, starting with primary data sources (e.g., [HeaderValues]) before incorporating details (e.g., [LineItems]).
Document any assumptions made during retrieval, and avoid ambiguous queries to ensure accurate and interpretable results.
When users ask to view documents from "SharePoint", leverage the search index reference to display documents from the appropriate Batch in Grooper.


Max Retrieval Depth
This is the maximum number of retrieval operations which may be chained together in response to a user inquiry.

Chained retrieval (aka multi-hop retrieval) involves performing multiple sequential retrieval steps, where each step refines the query or retrieves additional information based on the previous results. This is particularly useful when answering complex queries that require combining information from multiple sources or contexts.

Setting this property to 1 disables multi-hop retrieval entirely.

Multi-hop retrieval is a technique used in Retrieval-Augmented Generation (RAG) systems where the AI Assistant retrieves information from multiple related sources or performs sequential retrieval steps to answer a query more effectively. Instead of pulling data from a single resource in isolation, multi-hop retrieval allows the assistant to make connections between different sources to provide more comprehensive and accurate responses.

How Multi-Hop Retrieval Works

  • First-Hop Retrieval: The assistant retrieves an initial set of relevant documents, records, or data from a primary resource (e.g., an invoice database or document index).
  • Second-Hop Retrieval: Based on the information retrieved in the first step, the assistant performs a secondary retrieval to gather related details from another source (e.g., invoice header data from a SQL database followed by retrieving line items from another table).
  • Contextual Integration: The retrieved data from both (or multiple) hops is combined and injected into the conversation to provide a well-informed response.

Example in the Context of the AI Assistant
If a user asks: "What is the total amount for invoice number 12345, and what items were purchased?"

  • First-Hop Retrieval: The assistant queries the [HeaderValues] table to retrieve the invoice's total amount.
  • Second-Hop Retrieval: Using the retrieved invoice number, it queries the [LineItems] table to list the purchased items.
  • Response: The assistant combines the data and presents a unified answer.

Keep in mind the tables referenced above are specific to the example used in this article.
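The three steps above amount to chaining lookups, where the second retrieval is keyed by a value returned from the first. A minimal stand-in for that flow (the dictionaries below substitute for the SQL tables, with invented data):

```python
# Hypothetical in-memory stand-ins for [HeaderValues] and [LineItems].
header_values = {"12345": {"Total": 50.0, "Date": "2025-01-15"}}
line_items = {"12345": [("Widget", 30.0), ("Gadget", 20.0)]}

def answer(invoice_no):
    # Hop 1: retrieve invoice metadata from the header source.
    header = header_values.get(invoice_no)
    if header is None:
        return "No results found."
    # Hop 2: use the invoice number from hop 1 to pull the line items.
    items = line_items.get(invoice_no, [])
    item_text = ", ".join(f"{name} (${amt:.2f})" for name, amt in items)
    # Contextual integration: combine both hops into one unified response.
    return f"Invoice {invoice_no}: total ${header['Total']:.2f}; items: {item_text}"

print(answer("12345"))
```

With Max Retrieval Depth set to 2 or higher, the assistant is permitted to perform both hops; at 1, it would stop after the header lookup.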


Benefits of Multi-Hop Retrieval

  • More Comprehensive Answers: Enables the assistant to retrieve related data points from different sources.
  • Improved Accuracy: Reduces the risk of incomplete or misleading responses.
  • Efficient Query Execution: Optimizes data retrieval by only pulling necessary details at each step.
  • Better Use of Structured and Unstructured Data: Can merge structured data from databases with unstructured text from document indexes.

How It Applies to the AI Assistant
Since the AI Assistant is connected to Search Index References, Table References, and/or Web Service References, it can use multi-hop retrieval to:

  • Find invoice metadata in a SQL table and then fetch the corresponding document in Azure AI Search.
  • Retrieve invoice headers from [HeaderValues], then retrieve item details from [LineItems].
  • Query external APIs for additional invoice-related data after retrieving basic details from SQL.

Keep in mind the tables referenced above are specific to the example used in this article.


Access List
Defines a restricted list of users who are allowed to interact with this AI Assistant. If set, only the specified users will have access to chat with the assistant. If left unset, all authenticated users within the system will be able to initiate chat sessions.

How It Works:

  • Users added to the Access List will have exclusive permission to engage with the assistant.
  • If a user not on the list attempts to interact, they will be denied access.
  • Useful for limiting access based on roles, departments, or security requirements.

Best Practices:

  • Use this property to control AI access for sensitive data, such as financial or proprietary information.
  • If restricting access, ensure the list is updated regularly to reflect personnel changes.
  • If the assistant is intended for general use, leave this property unset to allow all authenticated users access.

Proper configuration of the Access List ensures compliance with security policies and prevents unauthorized interactions with the AI Assistant.
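The access rule described above (unset means all authenticated users, set means only listed users) reduces to a one-line check. This is a sketch of the stated semantics, not Grooper's implementation:

```python
def can_access(user, access_list):
    """Return True if the user may chat with the assistant.
    access_list=None models an unset Access List (everyone allowed);
    otherwise only users on the list are allowed."""
    return access_list is None or user in access_list

print(can_access("alice", None))        # unset list: allowed
print(can_access("bob", ["alice"]))     # not on the list: denied
```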


Bot Connector
Enables integration between this AI Assistant and an Azure Bot Service, allowing users to interact with the assistant through Microsoft’s bot framework. This connection enables deployment across external platforms such as Microsoft Teams, Web Chat, Direct Line, or other Azure-supported channels.

Proper configuration of the bot connector ensures secure communication and optimal performance when extending the AI Assistant beyond the Grooper Chat interface. Below are the key sub-properties for setting up the integration:

  • Bot ID
    • The unique identifier (client ID) of the Azure Bot Service. This is required for OAuth Client Credentials authentication, allowing the AI Assistant to authenticate and interact with external platforms.
    • Must match the client ID registered in the Azure Bot Service.
    • Used in combination with the Bot Password (client secret) for authentication.
    • Ensure that it is kept secure to prevent unauthorized access.
  • Bot Password
    • The client secret (Bot Password) used for OAuth Client Credentials authentication. This allows the AI Assistant to securely authenticate with the Azure Bot Service.
    • Required for the bot to generate OAuth tokens and maintain secure API communication.
    • Must be securely stored and rotated periodically according to security best practices.
    • If the bot is unable to authenticate, verify that the password has not expired in Azure.
  • Log Trace Events
    • If enabled (set to true), this option logs trace events for each chat activity in the Grooper log, providing visibility into interactions and potential debugging insights.
    • Useful for troubleshooting issues related to bot interactions, message delivery, or authentication failures.
    • Can be monitored in the Grooper log viewer for detailed event tracking.
    • May generate large logs in high-traffic environments, so enable only when needed for debugging or analytics.
  • Document Links
    • Determines how Grooper documents should be referenced in AI Assistant responses when providing document-related answers.
    • Available Options:
      • None - Document hyperlinks are not provided in AI responses.
      • Direct - The AI provides public download links to documents, allowing external access.
      • InApp - Document links open directly in the Grooper UI, ensuring access is restricted to authenticated users.
    • Best Practices:
      • Use None if document references are not required.
      • Use Direct only if documents need to be shared externally and permissions allow it.
      • Use InApp for internal users needing direct access via the Grooper interface.
  • URL Path
    • The specific URL where the Azure Bot Service should post chat activities for this assistant. This ensures the bot can relay messages between external platforms and Grooper.
    • Must be publicly accessible from Azure Bot Service.
    • Ensure that firewall rules allow incoming web service calls from Azure to the Grooper server.
    • If the bot is not receiving messages, verify that the URL Path is correctly configured and accessible.

Best Practices for Setting up the Bot Connector

  • Ensure Bot ID and Bot Password match the credentials registered in Azure.
  • Enable Log Trace Events for debugging during setup, then disable it to reduce log size.
  • Choose the appropriate Document Links setting based on data access policies.
  • Verify firewall and network settings allow communication between Azure and Grooper.
  • Regularly rotate client secrets and update them in Grooper to maintain security.


Stream Response
Determines whether the AI Assistant delivers responses incrementally (streaming) or waits until the entire response is generated before displaying it.

How It Works:

  • If enabled (true): The assistant streams the response in parts, allowing users to see the answer as it is being generated in real-time.
  • If disabled (false): The assistant waits until the entire response is complete before displaying it.

Benefits of Streaming Responses:

  • Faster user experience - Users start seeing results sooner, reducing perceived wait times.
  • Improved interactivity - Ideal for long or complex responses, where users can begin reading while the AI continues generating.
  • Better usability for slower queries - Useful when retrieval involves multi-hop searches, database queries, or document lookups.

Best Practices:

  • Enable Stream Response for interactive, conversational AI experiences.
  • Disable it if responses need strict formatting or structured presentation (e.g., JSON output, tables).
  • If retrieval operations take significant time (e.g., database queries), streaming can improve perceived responsiveness.

Enabling streaming enhances the real-time feel of interactions, improving the overall chat experience.
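The difference between the two modes can be sketched with a generator standing in for the model's token stream (the tokens below are invented for illustration):

```python
def generate_tokens():
    """Stand-in for an LLM producing a response token by token."""
    for token in ["Invoice", " 12345", " totals", " $50.00."]:
        yield token

def respond(stream):
    if stream:
        # Streaming: each chunk is displayed as soon as it is produced,
        # so the user starts reading before generation finishes.
        parts = []
        for token in generate_tokens():
            print(token, end="", flush=True)
            parts.append(token)
        print()
        return "".join(parts)
    # Non-streaming: wait for the complete response, then display it once.
    full = "".join(generate_tokens())
    print(full)
    return full
```

Both modes produce the same final text; streaming only changes when the user starts seeing it, which is why it mainly improves perceived responsiveness for slow, multi-hop retrievals.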

The Chat page: Using the AI Assistant

  1. Go to the Grooper Chat page.
  2. Select an AI Assistant to use from the drop-down menu found in the top-right of the page.
  3. Click the "Add Chat" button found near the top-left of the page, under the navigation links.
  4. A new chat will be added to the column of available chats on the left side of the page.
  5. Enter messages to the chatbot in the area near the bottom of the page.
  6. Messages and responses are displayed in the area above the message input. You can mouse over individual messages or responses to copy or delete them.
  7. Several buttons above the user chat panel provide functionality:
    • Activate voice recognition.
    • Enable or disable automatic narration of AI responses.
    • Toggle between HTML and plain text view for messages.
    • Show or hide system messages.
    • View diagnostics for the previous response.
    • Copy the current conversation.
    • Clear the current chat session.
    • Send the current chat message.


  8. When system messages are shown, you can expand the Retrieval Plan and see the JSON generated.
    • Mouse over a system message and click the "expand" button.


  9. When system messages are shown, you can also view the SQL query used in the Retrieved Query Results.
    • Mouse over a system message and click the "expand" button.


  10. When a message contains a link to a document, you can expand the "Footnotes" and click the document link to display it in the Document Viewer.