LLM Connector (Repository Option)

From Grooper Wiki


LLM Connector is a Repository Option that enables OpenAI-based functionality for the local Grooper repository.

About

The only configurable property on the LLM Connector is Service Providers. This property is a collection of LLM Providers, each of which contains properties that establish a connection to an LLM API.

Azure Provider

The main property of concern here is Deployments. This property is a collection of one or more Deployments specific to the Azure LLM API.

Chat Service

More information coming soon...

Embeddings Service

More information coming soon...

OpenAI Provider

FYI

In order to connect Grooper to OpenAI webservices, you will need an OpenAI API key.

  1. Create or sign in to an OpenAI account at https://platform.openai.com.
  2. Navigate to the "API keys" page (https://platform.openai.com/api-keys).
  3. Select the "Create new secret key" button. You may optionally name the key.
  4. Copy the generated key, save it somewhere safe, and do not share it with anyone.

BE AWARE: You must have a payment method in your OpenAI account to use LLM Constructs (such as AI Extract) in Grooper. If you do not have a payment method, Grooper cannot return a list of models when configuring an LLM Construct's Model property.
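As a quick sanity check outside of Grooper, you can verify a key is active by calling OpenAI's model-listing endpoint yourself; this is essentially the same call used to populate an LLM Construct's Model property. A minimal Python sketch (the key value shown is a placeholder):

```python
import json
import urllib.request

def build_models_request(api_key: str) -> urllib.request.Request:
    # OpenAI's model-listing endpoint, authenticated with a Bearer token.
    return urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_models_request("sk-...")  # placeholder key
print(req.full_url)

# To actually send the request (requires a valid key and network access):
# with urllib.request.urlopen(req) as resp:
#     model_ids = [m["id"] for m in json.load(resp)["data"]]
```

If the request returns an authentication or billing error here, Grooper will likewise be unable to list models.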

URL
This property is the URL at which OpenAI services are hosted. It will typically be left at the default setting, as most users will use OpenAI's hosted service. If you are hosting your own OpenAI-compatible service, enter its URL here instead.

Authorization
This property controls how the API key is handed to the requested service.

  • None: No API key required.
  • Bearer: "Bearer" authorization is a type of authentication scheme used in HTTP authentication protocols. In this scheme, a "bearer token" is sent in the HTTP header to authenticate requests to protected resources. The token acts as a key, and the server validates it to allow access. The format of the authorization header is typically:
    Authorization: Bearer <token>
This method is widely used in modern web services and APIs, particularly in OAuth 2.0.
  • API Key: This is a "Microsoft-style" API key authentication. "Microsoft-style API key authentication" refers to an authentication method where clients include an API key in the HTTP request header to access Microsoft services or APIs. This key acts as a unique identifier for the client and authorizes their access to the requested resources. The typical format of the authorization header is:
    Ocp-Apim-Subscription-Key: <api-key>
This method is simple to implement and is used to track and control how the API is used.
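The two authenticated schemes differ only in the HTTP header they attach to each request. A sketch of the header each Authorization setting produces (key values are placeholders):

```python
def auth_headers(scheme: str, api_key: str) -> dict:
    # Build the HTTP header corresponding to each Authorization setting.
    if scheme == "None":
        return {}  # no API key required
    if scheme == "Bearer":
        # OAuth 2.0-style bearer token
        return {"Authorization": f"Bearer {api_key}"}
    if scheme == "API Key":
        # "Microsoft-style" subscription key header
        return {"Ocp-Apim-Subscription-Key": api_key}
    raise ValueError(f"Unknown scheme: {scheme}")

print(auth_headers("Bearer", "sk-..."))
```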

API Key
This property is the API key to use when accessing web services. This is the main property to consider when connecting to the OpenAI API. Obtain an API key from OpenAI and insert it here.

Use System Messages
This property specifies whether to use "system" messages for contextual information, or to use "user" messages for all interaction. It is off by default on the assumption that you may connect to LLMs such as Mistral or Llama, which do not use system messages. In the context of large language models, such as those used in conversational AI systems, "system messages" and "user messages" serve distinct roles:

  • System Messages: These are instructions or context provided by the system to guide the behavior and responses of the language model. They often include information about the conversation's tone, style, and specific tasks the model should perform. For example, a system message might specify that the model should respond in a formal tone or prioritize certain types of information.
  • User Messages: These are inputs or queries provided by the user. They represent the questions, commands, or prompts that the user wants the model to respond to. The model processes these messages to generate appropriate and relevant responses based on the given instructions and context.

Together, system and user messages facilitate dynamic and contextually appropriate interactions between the user and the language model.
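In OpenAI-style chat APIs, this distinction appears as the role field on each message. A sketch of the same instruction sent both ways (the instruction and question text are illustrative):

```python
instruction = "Respond in a formal tone."
question = "Summarize this invoice."

# Use System Messages enabled: context travels in a "system" message.
with_system = [
    {"role": "system", "content": instruction},
    {"role": "user", "content": question},
]

# Use System Messages disabled (e.g. for models that ignore the system
# role): the context is folded into the "user" message instead.
without_system = [
    {"role": "user", "content": f"{instruction}\n\n{question}"},
]
```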

OpenAI compatible services

Grooper can connect to any LLM service that adheres to the OpenAI API standard using the OpenAI provider.

We have confirmed the following services will integrate using the OpenAI provider:

  • Various models using Groq
  • Various models using OpenRouter
  • Various models hosted locally using LMStudio
  • Various models hosted locally using Ollama

BE AWARE: While OpenAI itself is fully compatible with all LLM constructs in Grooper, these services may only be partially compatible when connected through the OpenAI provider.
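Connecting to one of these services amounts to changing the URL property while keeping the OpenAI-style request shape. A sketch of a chat-completion request aimed at a locally hosted server (the base URL and model name below are typical Ollama defaults, but verify them against your own installation):

```python
import json
import urllib.request

def build_chat_request(base_url, model, prompt, api_key=""):
    # Same request shape as OpenAI's hosted service; only the URL differs.
    headers = {"Content-Type": "application/json"}
    if api_key:  # local servers often require no key (Authorization: None)
        headers["Authorization"] = f"Bearer {api_key}"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions", data=body, headers=headers)

# Assumed local Ollama endpoint and model name -- check your setup:
req = build_chat_request("http://localhost:11434/v1", "llama3", "Hello")
print(req.full_url)
```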

How To

Establish an Open AI Provider

  1. Click on the Grooper Root node.
  2. Click the ellipsis button for the Options property.
  3. In the "Options" window click the "Add" button.
  4. Choose "LLM Connector" from the drop-down menu.


  1. Click the ellipsis button for the "Service Providers" property of the newly added LLM Connector.
  2. Click the "Add" button in the "Service Providers" window.
  3. Choose the Open AI Provider option from the drop-down menu.


  1. Enter your API Key into the API Key property.