Making an AI Assistant from HTTP Import (Simple Functionality)

From Grooper Wiki
Revision as of 14:53, 20 April 2026 by Randallkinard

This article is about the current version of Grooper. Note that some content may still need to be updated.

You may download the ZIP file below for use in your own Grooper environment (version 2025). This is a Project ZIP file.

* 2025-Project_Making-an-AI-Assistant-from-HTTP-Import.zip (https://www.bisok.com/resources/example-projects/simple-functionality/2025-Project_Making-an-AI-Assistant-from-HTTP-Import.zip)

Introduction

This article demonstrates how to build a fully functional, domain-specific AI Assistant in Grooper by ingesting web content, indexing it, and connecting it as a knowledge resource for conversational use. It showcases how Grooper can transform publicly available website content into a searchable knowledge base that powers intelligent, context-aware responses.

The intention of this article is to provide a clear, end-to-end example of how several key Grooper features—HTTP Import, Batch Processing, AI Search, and AI Assistants—work together to create a practical AI solution. Instead of relying on pre-existing datasets, this workflow dynamically pulls content from a live website, processes it as documents within a Batch, and prepares it for retrieval through a search index.

The guide walks through configuring the required services and repository options, setting up a search index with vector-based retrieval, importing and processing web pages, and finally connecting that indexed content to an AI Assistant. The result is an assistant capable of answering questions using the imported material as its knowledge source.

By the end of this article, readers will understand how to ingest external web content into Grooper, structure it for search and retrieval, and leverage it within an AI Assistant—demonstrating a powerful pattern for creating custom, knowledge-driven conversational experiences.
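The overall flow described above (crawl a site, turn pages into a batch of documents, index them, and answer questions against the index) can be sketched in a few lines. This is a conceptual illustration only, not Grooper's API; every function name here is hypothetical.

```python
# Conceptual sketch (NOT Grooper's API) of the end-to-end flow the
# article builds: crawl -> batch of documents -> index -> assistant.
def crawl(url):
    # Stands in for HTTP Import: fetch pages from a live site.
    return [f"page fetched from {url}"]

def make_batch(pages):
    # Stands in for Batch Processing: wrap each page as a document.
    return [{"text": p} for p in pages]

def index_batch(batch):
    # Stands in for AI Search: make documents retrievable.
    return {doc["text"]: doc for doc in batch}

def ask(index, question):
    # Stands in for the AI Assistant: retrieve, then answer with grounding.
    hits = [t for t in index if "page" in t]
    return f"Answer to {question!r} grounded in {len(hits)} document(s)"

answer = ask(index_batch(make_batch(crawl("https://example.gov"))),
             "What is Article I?")
```

Each of the sections below replaces one of these placeholder functions with the corresponding Grooper feature.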

Initial setup

This portion of the article ensures the environment is properly configured before any ingestion or AI functionality is used. It begins by verifying that required services—Activity Processing and Import Watcher—are installed and running, which can be confirmed in the Machines folder of the Node Tree on the Design page. These services are essential for executing Batch Processes and handling automated imports.

It also includes configuring Repository Options at the Root node level by adding and setting up both an LLM Connector (for AI-driven functionality) and an AI Search option (for indexing and retrieval). This establishes the foundational connections Grooper needs to interact with external AI models and search infrastructure.
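In other words, two services must be running and two repository options must be configured before anything else works. A minimal sketch of that prerequisite check, with names taken from the article itself (the data structures are illustrative, not a real Grooper API):

```python
# Hypothetical readiness check; the option and service names mirror the
# article's prerequisites, not an actual Grooper interface.
REQUIRED_OPTIONS = {"LLM Connector", "AI Search"}
REQUIRED_SERVICES = {"Activity Processing", "Import Watcher"}

def ready(configured_options, running_services):
    """Report which prerequisites are still missing, if any."""
    return {
        "missing_options": REQUIRED_OPTIONS - set(configured_options),
        "missing_services": REQUIRED_SERVICES - set(running_services),
    }

# Example: AI Search has not been added to Repository Options yet.
status = ready({"LLM Connector"},
               {"Activity Processing", "Import Watcher"})
```

If either set in the result is non-empty, the later steps (import, indexing, chat) will fail, which is why the article front-loads this verification.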

Search Index setup

This section focuses on preparing the Content Model for indexing and enabling search capabilities. It involves configuring the Indexing Behavior, including selecting an embeddings model for vector-based search and understanding how document content will be stored and retrieved.

Once configured, the search index is created and associated with the Content Model. This step ensures that any documents processed later can be added to the index, making them searchable and available as a knowledge resource for the AI Assistant.
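To make the idea of vector-based retrieval concrete, here is a self-contained toy version: each document is turned into a vector, and queries are matched by cosine similarity. A real index (such as the one Grooper builds with an embeddings model) uses learned neural embeddings rather than the word-count vectors used here; this sketch only illustrates the mechanism.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words term counts. A production index would
    call a neural embeddings model instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Indexing": store each page's text alongside its vector.
index = [(doc, embed(doc)) for doc in [
    "The legislature makes state law",
    "The governor heads the executive branch",
]]

def search(query, k=1):
    """Return the k documents most similar to the query vector."""
    qv = embed(query)
    return sorted(index, key=lambda d: cosine(qv, d[1]), reverse=True)[:k]

best = search("who leads the executive branch")[0][0]
```

The same retrieve-by-similarity step is what lets the AI Assistant later pull only the relevant passages from the indexed web pages instead of the whole corpus.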

Import setup

This portion walks through configuring and submitting an HTTP Import job to ingest web-based content into Grooper. Users define a target website, specify how links are discovered and filtered, and ultimately execute the import.

The goal is to bring external web pages into Grooper as documents in a Batch, where they can be processed just like any other document set. This demonstrates how Grooper can acquire knowledge from online sources and prepare it for downstream processing and indexing.
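The two key decisions in an HTTP import (how links are discovered on a page, and which discovered links pass the filter) can be sketched with the Python standard library. This is a conceptual stand-in for the crawl, not Grooper's HTTP Import implementation; the page markup and host names are made up for illustration.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    """Collects href values from anchor tags, mimicking the
    link-discovery phase of an HTTP import crawl."""
    def __init__(self, base):
        super().__init__()
        self.base = base
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's base URL.
                    self.links.append(urljoin(self.base, value))

def discover(html, base, allowed_host):
    """Parse a page and keep only links on the allowed host -- the same
    idea as an import job's link filter."""
    parser = LinkCollector(base)
    parser.feed(html)
    return [u for u in parser.links if urlparse(u).netloc == allowed_host]

page = ('<a href="/constitution/article1">Article 1</a>'
        '<a href="https://other.example/x">Off-site</a>')
links = discover(page, "https://example.gov/", "example.gov")
```

Here the off-site link is dropped by the host filter, which is exactly why link filtering matters: without it, a crawl of one site can wander across the entire web.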

Batch Process testing

This section focuses on validating the HTTP Ingestion Batch Process by examining and testing each of its steps. Each stage of the process—such as loading content, processing documents, and preparing them for indexing—is reviewed to ensure it is configured correctly and functioning as expected.

By testing individual steps, users gain insight into how imported web content is transformed within Grooper. This includes confirming that documents are properly created, text is accessible, and indexing preparation steps are executed successfully. This approach helps identify and resolve issues before relying on the process in a fully automated workflow.
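The step-by-step testing approach can be mirrored in miniature: model each stage as a function, run them one at a time, and assert on the intermediate output before chaining them. The step names and HTML-stripping logic below are illustrative assumptions, not the actual activities in the HTTP Ingestion Batch Process.

```python
import re

def load(batch):
    """Stand-in for the load step: attach raw page content."""
    batch["pages"] = ["<p>Article I...</p>"]
    return batch

def extract_text(batch):
    """Stand-in for text extraction: strip markup from each page."""
    batch["text"] = [re.sub(r"<[^>]+>", "", p).strip() for p in batch["pages"]]
    return batch

def prepare_index(batch):
    """Stand-in for indexing preparation: keep non-empty documents."""
    batch["indexed"] = [{"content": t} for t in batch["text"] if t]
    return batch

# Run and check each step in isolation before trusting the full chain,
# mirroring how the article validates the Batch Process step by step.
batch = {}
for step in (load, extract_text, prepare_index):
    batch = step(batch)
    assert batch, f"{step.__name__} produced no output"
```

Testing at each boundary makes it obvious which stage broke (empty pages, markup left in the text, or nothing reaching the index) rather than discovering a failure only at the end of an automated run.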

AI Assistant setup

This final portion explains how to configure an AI Assistant to use the indexed documents as its knowledge resource. It includes selecting an LLM model, assigning the previously created search index as a resource, and optionally refining the assistant’s instructions and behavior.

Once configured, the AI Assistant can be used from the Chat page to ask questions about the imported content. This step demonstrates how Grooper brings everything together—turning ingested and indexed web data into a conversational interface capable of retrieving and presenting relevant information on demand.
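Under the hood, an assistant with a search-index resource typically assembles each LLM call from three parts: its instructions, the retrieved passages, and the user's question. A minimal sketch of that assembly, using a generic chat-message structure (the message format and wording are assumptions for illustration, not Grooper's internal prompt):

```python
def build_prompt(question, retrieved, instructions):
    """Assemble the messages an assistant would send to its LLM:
    system instructions plus retrieved passages as grounding context."""
    context = "\n\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(retrieved))
    return [
        {"role": "system",
         "content": f"{instructions}\nUse only this context:\n{context}"},
        {"role": "user", "content": question},
    ]

# Example: one passage retrieved from the indexed web pages.
messages = build_prompt(
    "How can the constitution be amended?",
    ["Article XXIV describes the amendment process."],
    "Answer from the indexed documents only.",
)
```

Because the retrieved passages are injected into every call, the assistant's answers stay grounded in the imported content rather than in the model's general training data.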

Example Questions for the AI Assistant

What are the different branches of government in Oklahoma, and how do they function?
What rights do Oklahoma citizens have that go beyond the U.S. Constitution?
How can the Oklahoma State Constitution be amended?
How has the Oklahoma State Constitution changed over time?
How does the Oklahoma Constitution address public education and school funding?
What provisions exist in the Constitution regarding taxation and economic development?
What are the rules for elections and voter rights in Oklahoma?
What powers does the state government have over cities and counties?

For more information