2023:GPT Complete (Value Extractor)

This article is about an older version of Grooper.

Information may be out of date and UI elements may have changed.


GPT Complete is a Value Extractor that leverages OpenAI's GPT models to generate chat completions for inputs, returning one hit for each result choice provided by the model's response.

PLEASE NOTE: GPT Complete is a deprecated Value Extractor. It uses an outdated method to call the OpenAI API. Please use the Ask AI extractor going forward.
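To make "one hit for each result choice" concrete, here is a minimal sketch of the kind of request involved. It calls OpenAI's completions endpoint over HTTP (presumably the older completions-style call rather than the newer chat API, though that is an assumption); the helper name, model name, and parameter values are illustrative only and do not reflect Grooper's internal implementation.

  import requests

  API_KEY = "sk-..."  # the API Key property (example placeholder)

  def gpt_complete(prompt, n=1):
      # Illustrative only: send a prompt to OpenAI's (now-legacy) completions
      # endpoint and return one extractor "hit" per result choice in the response.
      resp = requests.post(
          "https://api.openai.com/v1/completions",
          headers={"Authorization": f"Bearer {API_KEY}"},
          json={
              "model": "text-davinci-003",   # the Model property
              "prompt": prompt,
              "max_tokens": 256,             # the Max Response Length property
              "n": n,                        # number of result choices requested
          },
          timeout=30,                        # the Timeout property
      )
      resp.raise_for_status()
      return [choice["text"].strip() for choice in resp.json()["choices"]]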

You may download the ZIP(s) below and upload them into your own Grooper environment (version 2023). The first contains one or more Batches of sample documents. The second contains one or more Projects with resources used in examples throughout this article.

About

GPT Complete is a Value Extractor that was added to Grooper 2023. It is the setting you choose to leverage GPT integration on an extractor. Below are some examples of configuration and use. You should be able to follow along using the GPT Integration ZIP files (a Batch and a Project are included) provided with this article. Begin by following along with the instructions. The details of the properties will be explained afterward.

It is also worth noting that the examples given below ARE NOT a comprehensive list. Only a few example prompts used in extraction are provided, to get you thinking about what can be done. It is highly recommended that you not only reference the materials linked above, but also spend time experimenting and testing. Good luck!

Extractor Properties

Before moving on to how the GPT model is used for extraction in Grooper, let's take a look at the properties used in the GPT Complete extractor.

API Key

You must fill this property with a valid API key from OpenAI in order to leverage GPT integration with Grooper. See the Obtain an API Key section of the GPT Integration article for instructions on how to get a key.

Model

The API Key you use will determine which GPT models are available to you. The different GPT models can affect the text generated based on their size, training data, capabilities, prompt engineering, and fine-tuning potential. GPT-3's larger size and training data, in particular, can potentially result in more sophisticated, diverse, and contextually appropriate text compared to GPT-2. However, the actual performance and quality of the generated text also depend on various other factors, such as prompt engineering, input provided, and specific use case requirements. GPT-4 is the latest version, as of this writing, and takes the GPT model even further.

Parameters

Please see the Parameters article for more information.
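Although the property details live in the Parameters article, these are the standard OpenAI sampling parameters. As a rough illustration only (the values below are arbitrary examples, not recommendations), they would travel in the request payload like this:

  # Standard OpenAI sampling parameters (example values only)
  sampling_params = {
      "temperature": 0.2,        # lower = more focused/deterministic output; higher = more varied
      "top_p": 1.0,              # nucleus sampling: only tokens within the top p probability mass are considered
      "presence_penalty": 0.0,   # positive values discourage reusing tokens that already appear in the output
      "frequency_penalty": 0.0,  # positive values penalize tokens in proportion to how often they already appear
  }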

Timeout

The amount of time, in seconds, to wait for a response from the web service before raising a timeout error.

Instructions

The instructions or question to include in the prompt. The prompt sent to OpenAI consists of text content from the document, which provides context, plus the text entered here. This property should ask a question about the content or provide instructions for generating output. For example, "what is the effective date?", "summarize this document", or "Your task is to generate a comma-separated list of assignors".
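In other words, the prompt is just the document's text followed by whatever you type here. Below is a minimal sketch of that assembly (the exact ordering and separator Grooper uses are assumptions for illustration):

  def build_prompt(document_text, instructions):
      # Document text supplies the context; the Instructions property supplies
      # the question or task appended after it.
      return document_text + "\n\n" + instructions

  # Example using one of the prompts from this article:
  prompt = build_prompt("...full OCR text of the lease document...",
                        "Who is the lessor? Respond only with the lessor's name.")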

Preprocessing

Please visit the Text Preprocessor article for more information. Put simply, these tools allow the insertion (or deletion) of control characters to give textual context to information that would otherwise be spatial: GPT has no awareness of where text sits on the page, so adding characters like tabs or paragraph marks increases the chance it will understand table-like or positional content.

Overflow Disposition

Specifies the behavior when the document content is longer than the context length of the selected model (a rough sketch of both behaviors follows the list below).

May be one of the following:

  • Truncate - The content will be truncated to fit the model's context length.
  • Split - The content will be split into chunks which fit the model's context length. One result will be returned for each chunk.
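The sketch below illustrates the two dispositions using a naive whitespace word count as a stand-in for real tokenization (an assumption purely for illustration; Grooper's actual chunking logic is not shown):

  def apply_overflow_disposition(content, max_content_tokens, disposition="Truncate"):
      # Whitespace-separated words stand in for tokens here (illustration only).
      tokens = content.split()
      if disposition == "Truncate":
          # Keep only as much content as fits; a single prompt is sent.
          return [" ".join(tokens[:max_content_tokens])]
      # "Split": break the content into chunks that each fit the context length;
      # one result is returned for each chunk.
      return [" ".join(tokens[i:i + max_content_tokens])
              for i in range(0, len(tokens), max_content_tokens)]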

Context Extractor

An optional extractor which filters the document content included in the prompt. All Value Extractor types are available.

Max Response Length

The maximum length of the output, in tokens. 1 token is equivalent to approximately 4 characters for English text. Increasing this value decreases the maximum size of the context.

Maximum Content Length

The maximum amount of content from the document to be included, in tokens.
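Taken together, Max Response Length and Maximum Content Length split the model's context window between output and input. Here is a back-of-the-envelope sketch of that arithmetic, using the rough 4-characters-per-token rule quoted above (the 4,096-token window is just an example figure):

  CHARS_PER_TOKEN = 4  # rough rule of thumb for English text

  def content_token_budget(model_context_tokens, max_response_tokens):
      # Tokens left over for document content once the response has been reserved.
      return model_context_tokens - max_response_tokens

  # Example: a 4,096-token context with a 256-token Max Response Length leaves
  # roughly 3,840 tokens (about 15,360 characters) for document content.
  tokens_for_content = content_token_budget(4096, 256)
  approx_characters = tokens_for_content * CHARS_PER_TOKEN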

How To

  1. After importing the Grooper ZIP files provided with this course, expand the Node Tree out and select the Data Field named "Lessor".
  2. Click the drop-down menu for the Value Extractor property.
  3. Select the GPT Complete option from the menu.


  1. With the Value Extractor property set, click the ellipsis button to open its configuration window (if you prefer, you can instead click the drop-down arrow to the left of the property to edit its properties without a pop-up window).


  1. Start by entering your API key into the API Key property.
  2. Click the "Browse Batches" button.
  3. Select "GPT Complete Examples" Batch in the "GPT Integration - Batches" folder from the menu.


  1. Select "Lease (1)" from the Batch Viewer.
  2. Click the ellipsis button for the Instructions property to open its configuration window (if you prefer, you can instead simply type into the entry field of the property.)


  1. Type the string value Who is the lessor? into the editor.
  2. Click the "OK" button to accept and close this window.


  1. When the previous window closes the extractor will immediately fire (assuming you have automatic testing enabled), and you will see a result returned in the "Results" list view.

From a "prompt engineering" perspective the input we gave it is as basic as you can get. A result is returned, which is great, but it may not be the exact result that is desired. The value supplied is very conversational, which isn't necessarily a bad thing and is typical of an AI that's trained to emulate language, but considering how data is typically constructed in Grooper, it's not quite right. If you break it down, the result given is really four values: the lessor's name, their marital status, their gender, and their location. In this case the name of the lessor only will suffice.

The next thing to tackle will be using some prompt engineering to get a more specific result.

Getting a More Specific Result with Prompt Engineering

  1. Working with the same material as before, select the Data Field named "Lessee".
  2. Click the drop-down menu for the Value Extractor property.
  3. Select GPT Complete from the drop-down menu.


  1. With the Value Extractor set, click the ellipsis button to open its configuration window (if you prefer, you can instead click the drop-down arrow to the left of the property to edit its properties without a pop-up window).


  1. Start by entering your API key into the API Key property.
  2. Make sure "Lease (1)" is still selected in the Batch Viewer.
  3. Click the ellipsis button for the Instructions property to open its configuration window (if you prefer, you can instead simply type into the entry field of the property.)


  1. Type the string value Who is the lessee? into the editor.
  2. Click the "OK" button to accept and close this window.


  1. When the previous window closes the extractor will immediately fire (assuming you have automatic testing enabled), and you will see a result returned in the "Results" list view.
    • This is clearly a different result from the "Lessor", which is good, but let's address the issue mentioned previously. Let's use some simple "prompt engineering" to get the specific result desired.
  2. Click the ellipsis button for the Instructions property to open its configuration window (if you prefer, you can instead simply type into the entry field of the property.)


  1. Add to the string value Respond only with the lessee's name.
  2. Click the "OK" button to accept and close this window.


  1. This is a much better result than before. However, the period at the end is unnecessary and can be removed, again, by prompting the AI appropriately.
  2. Click the ellipsis button for the Instructions property to open its configuration window (if you prefer, you can instead simply type into the entry field of the property.)


  1. Add to the string value Don't include control characters.
  2. Click the "OK" button to accept and close this window.


  1. Perfect! This is the exact value needed.

This is still nothing more than a simple prompt, but notice how giving context and being more specific alters the result. As a user learning this new technology, it's now time to start experimenting with your prompts and getting creative to get the results you're looking for.
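To recap the iteration in code form, the sketch below replays the three stages of the Lessee instruction against the same document text, reusing the illustrative gpt_complete and build_prompt helpers sketched earlier in this article (hypothetical code, not Grooper functionality):

  document_text = "...full OCR text of Lease (1)..."

  instruction_versions = [
      "Who is the lessee?",
      "Who is the lessee? Respond only with the lessee's name.",
      "Who is the lessee? Respond only with the lessee's name. Don't include control characters.",
  ]

  # Each refinement narrows the answer from a conversational sentence toward
  # the single clean value the Data Field actually needs.
  for instructions in instruction_versions:
      hits = gpt_complete(build_prompt(document_text, instructions))
      print(repr(hits[0]))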

Example: Full and Brief Document Summary

  1. Working with the same material as before, select the Data Field named "Full Summary".
  2. Click the drop-down menu for the Value Extractor property.
  3. Select GPT Complete from the drop-down menu.


  1. With the Value Extractor set, click the ellipsis button to open its configuration window (if you prefer, you can instead click the drop-down arrow to the left of the property to edit its properties without a pop-up window).


  1. Start by entering your API key into the API Key property.
  2. Type tldr into the Instructions property.
  3. Assuming you have automatic testing enabled, you will see a result returned in the "Results" list view. Click this result.
  4. Click the "Inspect" button.


  1. In the "Data Inspector" you will see the number of characters in the result.
  2. You will also see the full text of the summary.
  3. Right-click in a blank space to get a list of commands.
  4. Make sure "Text Wrap" is enabled so that the text will wrap like it is in the screenshot.


  1. After confirming the previous settings and closing windows, right-click the "Full Summary" Data Field to get a list of commands.
  2. Select the "Clone..." command.


  1. Name the clone "Brief Summary".
  2. Confirm the clone by clicking the "Execute" button.


  1. With the clone made, click the ellipsis button of the Value Extractor property to open its configuration window (if you prefer, you can instead click the drop-down arrow to the left of the property to edit its properties without a pop-up window).


  1. Add in 100 words or less to the Instructions property.
  2. A result will be returned in the "Results" list view. Select this result.
  3. Click the "Inspect" button.


  1. In the "Data Inspector" you will now notice this result's length is much shorter.
  2. The summary given is much shorter than the previous due to the additional instruction given in the prompt.

Example: Sentiment Analysis

  1. Working with the same material as before, select the Data Field named "Sentiment Analysis".
  2. Click the drop-down menu for the Value Extractor property.
  3. Select GPT Complete from the drop-down menu.


  1. With the Value Extractor set, click the ellipsis button to open its configuration window (if you prefer, you can instead click the drop-down arrow to the left of the property to edit its properties without a pop-up window).


  1. Start by entering your API key into the API Key property.
  2. Click the ellipsis button for the Instructions property to open its configuration window (if you prefer, you can instead simply type into the entry field of the property.)


  1. Type the string Is this document's sentiment positive, negative, or neutral? Respond with only the sentiment and no control characters. into the editor.
  2. Click the "OK" button to accept and close this window.


  1. When the previous window closes, click on "Document (4)" in the Batch Viewer.
  2. Assuming you have automatic testing enabled, you will see a result ("negative") returned in the "Results" list view.


  1. Click on "Document (5)" in the Batch Viewer.
  2. Assuming you have automatic testing enabled, you will see a result ("positive") returned in the "Results" list view.
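The same pattern generalizes beyond field extraction: the instruction constrains the model to a small label set, and a little downstream logic can normalize whatever comes back. Below is a hedged sketch reusing the illustrative helpers from earlier (again, not Grooper functionality):

  SENTIMENT_PROMPT = ("Is this document's sentiment positive, negative, or neutral? "
                      "Respond with only the sentiment and no control characters.")

  def classify_sentiment(document_text):
      answer = gpt_complete(build_prompt(document_text, SENTIMENT_PROMPT))[0]
      label = answer.strip().lower()
      # Guard against the model drifting outside the requested label set.
      return label if label in {"positive", "negative", "neutral"} else "unknown"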