OCR (Concept)
OCR stands for Optical Character Recognition. It allows text from paper documents to be digitized so it can be searched or edited by other software applications. OCR converts typed or printed text from digital images of physical documents into machine readable, encoded text. This conversion allows Grooper to search the text characters from the image, providing the capability to separate images into documents, classify them, and extract data from them.
About
The quick explanation of OCR is that it analyzes pixels on an image and translates those pixels into text. Most importantly, it translates pixels into machine readable text. Grooper can be described as a document modeling platform. You use the platform to model how pages are separated out into documents, how one document gets put into one category or another, and how extractable data is structured on the document. Once you have this model of what a document is, how it fits into a larger document set, and where the data is on it, you can use it to programmatically process any document that fits the model.
In order to do any of that, you have to be able to read the text on the page. How do you know an invoice is an invoice? A simple way could be locating the word "invoice" (or other text associated with the invoice). You, as a human, do this by looking at the ink on a page (or pixels for a digital document) and reading the word "invoice". Grooper does this by using a Data Extractor (and regular expression) to read the machine readable text for the page. OCR is how each page gets that machine readable text in order to model the document set and process it.
The General Process
In Grooper, OCR is performed by the Recognize activity, referencing an OCR Profile which contains all the settings to get the OCR results, including which OCR Engine is used. The OCR Profile also has settings to optionally process those results to increase the accuracy of the OCR Engine used. The general process of OCR'ing a document is as follows in Grooper:
1) The document image is handed to the Recognize activity, which references an OCR Profile, containing the settings to perform the OCR operation.
2) The OCR Engine (set on the OCR Profile) converts the pixels on the image into machine readable text for the full page.
3) Grooper reprocesses the OCR Engine's results and runs additional OCR passes using the OCR Profile's Synthesis properties.
4) The raw OCR results from the OCR Engine and Grooper's Synthesis results are combined into a single text flow.
5) Undesirable results can be filtered out using Grooper's Results Filtering options.
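The steps above can be sketched in Python. Every name below (`Segment`, `engine`, `synthesize`, the confidence cutoff) is an illustrative stand-in, not Grooper's actual API:

```python
# Illustrative sketch of the Recognize flow. All names here are hypothetical
# stand-ins, not Grooper's API.
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    confidence: float  # 0.0 - 1.0, as reported by the OCR engine

def recognize(image, engine, synthesize, min_confidence=0.5):
    raw = engine(image)                    # 2) engine turns pixels into text
    extra = synthesize(image, raw)         # 3) additional reprocessing passes
    merged = raw + extra                   # 4) combine into a single text flow
    return [s for s in merged if s.confidence >= min_confidence]  # 5) filter

# Toy stand-ins for an OCR engine and a synthesis pass:
engine = lambda img: [Segment("INVOICE", 0.95), Segment("@#!", 0.10)]
synthesize = lambda img, raw: [Segment("No. 1234", 0.80)]
print([s.text for s in recognize(None, engine, synthesize)])
# prints ['INVOICE', 'No. 1234']
```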
OCR vs. Native Text
OCR gets text specifically from images, whether they were printed and scanned or imported from a digital source. However, what if the document was created digitally and imported in its original digital form? Wouldn't it have been created on a computer, using machine readable text? Most likely, yes! If a form was created using a product like Adobe Acrobat and filled in using a computer, the text comprising the document and the filled fields is encoded within the document itself. This is called "Native Text". This text is already machine readable, so there is no reason to OCR the document. Instead, the native text is extracted via Grooper's native text extraction. Native text has a number of advantages over OCR. OCR is not perfect. As you will see, OCR is a fairly complicated process with a number of opportunities to misread a document. Grooper has plenty of advancements to get around these errors and produce a better result, but OCR will rarely be as accurate as the original native text from a digital document.
However, be careful. Just because a PDF document has machine readable text behind it does not mean that text is native text. If the document was OCR'd by a different platform, the text may have been inserted into the PDF (Grooper also has this capability upon exporting documents). In these cases, we still recommend OCR'ing the document to take advantage of Grooper's superior OCR capabilities and get a more accurate result.
Regardless of whether machine readable text is obtained through OCR or Native Text Extraction, both are done via the Recognize activity. In the case of OCR, you will need to create an OCR Profile containing all the settings to perform OCR and reference it during the Recognize activity. Native Text Extraction is enabled by default, but can be disabled if you wish to use OCR instead.
What is an OCR Engine?
OCR Engines are software applications that perform the actual recognition of characters on images, analyzing the pixels on the image and figuring out what text characters they match.
OCR Engines themselves have three phases:
Pre-Processing
First and foremost, OCR applications require a black and white image in order to determine what pixels on a page are text. So, color and grayscale images must be converted to black and white. This is done by a process called "thresholding" which determines a middle point between light pixels and dark pixels on the page. Lighter pixels are then turned into white and darker ones are turned into black pixels. You are left with only black and white pixels, with (ideally) all text in black and everything else faded into a white background.
| The original scanned image... | ...is turned black and white to divide the page into black pixels (text) and white pixels (the background). |
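The thresholding step can be sketched in a few lines of Python. This is a toy global threshold using the mean intensity; production engines typically use smarter methods such as Otsu's algorithm or adaptive (local) thresholding:

```python
# Global thresholding sketch: pick a cut point between light and dark pixels,
# then binarize. The mean-intensity threshold here is deliberately crude.

def binarize(gray, threshold=None):
    """gray: 2-D list of 0-255 intensity values. Returns 0 (black) / 255 (white)."""
    flat = [p for row in gray for p in row]
    if threshold is None:
        threshold = sum(flat) / len(flat)   # crude midpoint: the mean intensity
    return [[255 if p > threshold else 0 for p in row] for row in gray]

page = [[250, 245, 30],    # dark pixels (ink) on a light background
        [240, 20, 25],
        [255, 250, 248]]
print(binarize(page))
# prints [[255, 255, 0], [255, 0, 0], [255, 255, 255]]
```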
Some OCR Engines also contain de-skewing, despeckling, line removal, aspect ratio normalization, or other pre-processing functions to improve OCR results.
| FYI | Grooper has its own pre-processing capabilities through its Image Processing operations. OCR Engines typically place these pre-processing functions in a "black box" for users. At best, the OCR Engine may allow you to turn a property "on" or "off", but may not allow you to configure it further to fine-tune its results. Custom Image Processing can be performed using IP Profiles made of highly configurable IP Commands. |
One of the most important aspects of pre-processing is "segmentation". This is the process of breaking up a page into first lines, then words, and, finally, individual characters.
In general, this process involves distinguishing between text and the white space between text. Lines of text are distinguished by the horizontal space between one line and another. This can be seen using a histogram projection profile.

The gray peaks on the left side of the image show the amount of black pixels on each line of the page. The larger the peak, the larger the number of black pixels on that line. OCR "sees" a line break where there are gaps between those collections of pixels.
Words can be broken up in a similar way. One expects a small amount of space between characters. How we tell the difference between "rn" and "m", after all, is just that tiny amount of space between the "r" and "n". Between words, however, that space should be a bit larger. So, words are segmented at points where there are larger than normal amounts of white space between characters on a line.
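A minimal sketch of projection-profile segmentation, assuming a binary image represented as a 2-D list of 1s (black) and 0s (white). Here the profile is taken per row to find line breaks; the same gap-splitting logic applied to columns within a line (with a larger minimum gap) finds word breaks:

```python
# Projection-profile segmentation sketch: count black pixels per row, then
# split wherever the count drops to zero (an all-white gap).

def row_profile(binary):
    # binary: 2-D list where 1 = black (ink), 0 = white (background)
    return [sum(row) for row in binary]

def split_on_gaps(profile):
    """Return (start, end) index ranges separated by runs of zero counts."""
    spans, start = [], None
    for i, count in enumerate(profile):
        if count and start is None:
            start = i
        elif not count and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(profile)))
    return spans

# Two "lines" of ink separated by an all-white row:
page = [[1, 1, 0, 1],
        [0, 1, 0, 1],
        [0, 0, 0, 0],   # blank row -> line break
        [1, 1, 1, 0]]
print(split_on_gaps(row_profile(page)))   # prints [(0, 2), (3, 4)]
```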

In a perfect world, characters would be segmented out at this point as well. After all, there is still some space between each character, just a little smaller than between each word. You can easily see this with fixed-pitch fonts, where every character occupies the same horizontal width.
However, the world of printed text is rarely that perfect.
Looking at the image below, there is no white space between the "a" and "z" or "z" and "e" in "Hazel". Just looking at the histogram projection, there's no break in the pixels to define where one character stops and another begins. There's a slight break in the "n" in "Daniels". So, there is some white space in the middle of the character where there shouldn't be. But, that shouldn't mean those are two separate characters.

If the characters were separated out using the normal segmenting we've seen previously, we might expect a very poor result. However, ultimately, we get the result we expect, "Hazel Daniels".

Modern OCR Engines perform more sophisticated character level segmenting than just looking for small gaps between characters. Characters connected by small artifacts can be isolated from each other and characters that are broken apart can be linked together. This is done both by analyzing the peaks and valleys of pixel densities to determine where a gap "should" be as well as further segmenting the word to look at the context of portions of a character before and after to make a decision as to where one character starts and stops.
Once the OCR Engine has segmented the entire image into individual character segments, it can use character recognition to determine what character corresponds to that segment. However, this is where a lot of OCR errors can crop up. Depending on the quality of the image or the original document, characters can be joined and disconnected in many different ways. The OCR Engine may not perfectly separate out one segment of a word as the "right" character.
| FYI | Some amount of OCR errors are unavoidable. Document quality, scan quality, non-standard fonts and other issues can interfere with the OCR Engine producing 100% accurate results. Part of Grooper's job is to massage the OCR Engine's results, through Image Processing, OCR Synthesis and Fuzzy Matching, into more accurate ones. |
Character Recognition
Once the OCR Engine parses out the image into lines, and then words, and finally character segments, it must make a decision about what text character that character segment actually is. We're ready to do the "character recognition" part of "Optical Character Recognition".
There are two basic types of recognition algorithms: matrix matching and feature extraction.
Matrix matching compares an NxN matrix of pixels on a page to a library of stored character glyph examples. This is also known as "pattern recognition" or "image correlation".
| The character on the document's image... | ...is compared to a stored example... | ...by comparing a matrix of pixels, between the character on the image and the stored example. |
The OCR Engine then makes a decision about what character that matrix of pixels is. In this case, a "G". Matrix matching does have some pitfalls, however. Because it compares text to the stored glyph pixel by pixel, the text needs to be similar to the stored glyph's font and scale in order to match. While there may be hundreds of example glyphs of various fonts and scales for a single character, this can cause problems when matching text on poor quality images or using uncommon fonts.
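A toy illustration of matrix matching, using 3x3 binary grids in place of real glyph templates (real libraries hold many templates per character, across fonts and scales):

```python
# Matrix-matching sketch: score a character segment against stored glyph
# templates by the fraction of matching pixels. The 3x3 grids are toys.

GLYPHS = {
    "I": [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
    "L": [[1, 0, 0], [1, 0, 0], [1, 1, 1]],
}

def match_score(segment, template):
    matches = sum(s == t for srow, trow in zip(segment, template)
                         for s, t in zip(srow, trow))
    return matches / (len(segment) * len(segment[0]))

def recognize_char(segment, glyphs=GLYPHS):
    # Pick the template with the most matching pixels; the score doubles
    # as a crude confidence value.
    best = max(glyphs, key=lambda ch: match_score(segment, glyphs[ch]))
    return best, match_score(segment, glyphs[best])

noisy_i = [[0, 1, 0], [0, 1, 0], [0, 1, 1]]   # an "I" with one stray pixel
ch, score = recognize_char(noisy_i)
print(ch, round(score, 2))   # prints I 0.89
```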
The second type of recognition algorithm, feature extraction, decomposes characters into their component "features" like lines, line direction, line intersections or closed loops. These features are compared to vector-like representations of a character, rather than pixel representations of the character.
| Instead of pixels... | ...features matching how the character is drawn... | ...are compared to how those features are used to draw stored glyphs. |
Just like with matrix matching, the OCR Engine makes a decision about what character matches the extracted features. In this case, again, a "G". In engines using a combination of matrix matching and feature extraction, the results of both algorithms are combined to produce the best matching result. Each character is given a "confidence score", which corresponds to how closely the character segment's pixels matched the stored glyph's matrix, its features, or a combination of the two.
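How the two scores might blend into a single confidence value can be sketched as follows; the candidate scores and the weight are made-up numbers for illustration, not any engine's actual internals:

```python
# Blending matrix-matching and feature-extraction scores into one confidence.
# The weight and the per-candidate scores below are illustrative only.

def combined_confidence(matrix_score, feature_score, w_matrix=0.5):
    # Both inputs are normalized to 0..1; the weighted blend becomes the
    # character's confidence score.
    return w_matrix * matrix_score + (1 - w_matrix) * feature_score

# (matrix_score, feature_score) for each candidate glyph:
candidates = {"G": (0.91, 0.88), "C": (0.74, 0.69), "O": (0.80, 0.62)}
scored = {ch: combined_confidence(m, f) for ch, (m, f) in candidates.items()}
best = max(scored, key=scored.get)
print(best, round(scored[best], 3))   # prints G 0.895
```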
This presents another layer of potential errors. Given a document's quality, fonts used, and even the variable size of fonts on a page, the OCR Engine may recognize a character as the wrong glyph.
| What is this character? Is it a "G"? Is it a "C"? Is it a "0"? Is it an "O"? Is it just garbage? |
The OCR Engine has to make a decision, which ultimately may not line up with what it is on the page. Especially in situations like this, where it may be difficult for even a human being to read the character, the OCR Engine will have a hard time recognizing the character.
Post-Processing
Without any context around a character, these OCR errors are understandable. The letters "a" and "o" can look fairly similar, especially in certain fonts. However, the word "ballboy" is a real word, and "bollboy" is utter nonsense.
| Similar characters may get misread by OCR... | ...which becomes obvious given the context of that character inside a word. |
The most common post-processing done by OCR Engines is basic spell correction. Errors resulting from poor character recognition often produce small spelling mistakes. For many OCR Engines (including all commercial ones), results are compared to a lexicon of common English words, and replacements are made if possible.
Note, this correction is only going to apply to words in the OCR Engine's lexicon. Proper nouns (unless in the lexicon) will not be corrected.
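A minimal sketch of lexicon-based correction, using Python's standard `difflib` for similarity matching; the lexicon here is a tiny made-up word list:

```python
# Lexicon-based correction sketch: replace an OCR'd token with the closest
# dictionary word when it is similar enough. Tokens not near any lexicon
# entry (such as proper nouns) pass through unchanged.
from difflib import get_close_matches

LEXICON = {"ballboy", "invoice", "total", "amount"}

def correct(token, lexicon=LEXICON, cutoff=0.8):
    if token.lower() in lexicon:
        return token
    close = get_close_matches(token.lower(), lexicon, n=1, cutoff=cutoff)
    return close[0] if close else token

print(correct("bollboy"))   # prints ballboy
print(correct("Hazel"))     # prints Hazel  (not in the lexicon, left alone)
```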
| FYI | When the OCR Engine is not capable of confidently making a spell correction, Grooper may be able to use Fuzzy Matching to get around OCR errors. |