OCR (Concept)

OCR stands for Optical Character Recognition. OCR converts typed or printed text on digital images of physical documents into machine readable, encoded text, allowing text from paper documents to be digitized so it can be searched or edited by other software applications. This conversion allows Grooper to search the text characters from the image, providing the capability to separate images into documents, classify them, and extract data from them.

About

The quick explanation of OCR is that it analyzes the pixels on an image and translates those pixels into text. Most importantly, it translates pixels into machine readable text. Grooper can be described as a document modeling platform. You use the platform to model how pages are separated out into documents, how one document gets put into one category or another, and how extractable data is structured on the document. Once you have this model of what a document is, how it fits into a larger document set, and where the data is on it, you can use it to programmatically process any document that fits the model.

In order to do any of that, you have to be able to read the text on the page. How do you know an invoice is an invoice? A simple way could be locating the word "invoice" (or other text associated with invoices). You, as a human, do this by looking at the ink on a page (or pixels for a digital document) and reading the word "invoice". Grooper does this by using a Data Extractor (and a regular expression) to read the machine readable text for the page. OCR is how each page gets that machine readable text, so the document set can be modeled and processed.
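To make the idea concrete, here is a minimal Python sketch of that kind of check. It is not Grooper's Data Extractor (which is configured in the platform, not written in code); it just shows a regular expression scanning OCR output for the word "invoice":

    import re

    # Hypothetical OCR output for one page; in Grooper this text would
    # come from the Recognize activity.
    page_text = "INVOICE\nAcme Supply Co.\nInvoice No. 4471\nAmount Due: $982.50"

    # A case-insensitive pattern, similar in spirit to what a Data
    # Extractor's regular expression might look for.
    if re.search(r"\binvoice\b", page_text, re.IGNORECASE):
        print("This page looks like an invoice.")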

OCR vs. Native Text

OCR gets text specifically from images, whether they were printed and scanned or imported from a digital source. However, what if the document was created digitally and imported in its original digital form? Wouldn't it have been created on a computer, using machine readable text? Most likely, yes! If a form was created using a product like Adobe Acrobat and filled in using a computer, the text comprising the document and the filled fields is encoded within the document itself. This is called "Native Text". This text is already machine readable, so there is no reason to OCR the document. Instead, the native text is extracted via Grooper's native text extraction. Native text has a number of advantages over OCR. OCR is not perfect. As you will see, OCR is a fairly complicated process with a number of opportunities to misread a document. Grooper has plenty of advancements to get around these errors and produce a better result, but OCR will rarely be as accurate as the original native text from a digital document.
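For comparison outside Grooper, here is a minimal sketch of native text extraction using the open-source pypdf library (a stand-in chosen for illustration; Grooper's native text extraction is configured on the Recognize activity, not written in code):

    from pypdf import PdfReader

    # "form.pdf" is a hypothetical digitally-created PDF; its text is
    # already encoded in the file, so no OCR is needed to read it.
    reader = PdfReader("form.pdf")
    for page in reader.pages:
        print(page.extract_text())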

However, be careful. Just because a PDF document has machine readable text behind it does not mean that text is native text. If the document was OCR'd by a different platform, the text may have been inserted into the PDF (Grooper also has this capability upon exporting documents). In these cases, we still recommend OCR'ing the document to take advantage of Grooper's superior OCR capabilities and get a more accurate result.

Regardless of whether machine readable text comes from OCR or Native Text Extraction, both are obtained via the Recognize activity. In the case of OCR, you will need to create an OCR Profile containing all the settings to perform OCR and reference it in the Recognize activity. Native Text Extraction is enabled by default, but can be disabled if you wish to use OCR instead.

The General Process

The general process of OCR'ing a document in Grooper is as follows (a rough sketch in code follows the list):

1) The document image is handed to the Recognize activity, which references an OCR Profile, containing the settings to perform the OCR operation.

2) The OCR Engine (set on the OCR Profile) converts the pixels on the image into machine readable text for the full page.

3) Grooper reprocesses the OCR Engine's results and runs additional OCR passes using the OCR Profile's Synthesis properties.

4) The raw OCR results from the OCR Engine and Grooper's Synthesis results are combined into a single text flow.

5) Undesirable results can be filtered out using Grooper's Results Filtering options.
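The sketch below is a loose, hypothetical analogue of those five steps in Python. None of these names are Grooper APIs, and the confidence values are invented; the toy functions only mirror the flow described above:

    def engine_read(image):
        # 2) The OCR Engine reads pixels; returns (text, confidence) results.
        return [("lnvoice", 0.62), ("Invoice", 0.91)]

    def synthesis_passes(image, raw_results):
        # 3) Additional passes reprocess the engine's results (Synthesis).
        return [("Invoice", 0.95)]

    def merge(raw_results, synthesis_results):
        # 4) Combine both sets of results into a single text flow,
        #    keeping the highest-confidence reading.
        return [max(raw_results + synthesis_results, key=lambda r: r[1])]

    def filter_results(text_flow, min_confidence=0.50):
        # 5) Results Filtering: drop anything below a confidence floor.
        return [r for r in text_flow if r[1] >= min_confidence]

    image = None                                     # 1) the document image
    raw = engine_read(image)                         # 2)
    flow = merge(raw, synthesis_passes(image, raw))  # 3) and 4)
    print(filter_results(flow))                      # 5) -> [('Invoice', 0.95)]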

What is an OCR Engine?

OCR Engines are software applications that perform the actual recognition of characters on images, analyzing the pixels on the image and figuring out what text characters they match.

OCR Engines themselves have three phases:

Pre-Processing

First and foremost, OCR applications require a black and white image in order to determine which pixels on a page are text. So, color and grayscale images must be converted to black and white. This is done by a process called "thresholding", which picks a cutoff point between the light and dark pixels on the page. Pixels lighter than the cutoff are turned white and darker ones are turned black. You are left with only black and white pixels, with (ideally) all text in black and everything else faded into a white background.

The original scanned image is turned black and white, dividing the page into black pixels (text) and white pixels (the background).
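A minimal sketch of thresholding in Python using OpenCV's Otsu method, one common way to pick the cutoff automatically (Grooper's own thresholding is configured through the platform, not written in code):

    import cv2

    # "page.png" is a hypothetical scanned page.
    gray = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)

    # Otsu's method picks the cutoff that best separates light and dark
    # pixels; darker pixels become black (0), lighter become white (255).
    threshold, bw = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    cv2.imwrite("page_bw.png", bw)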

Some OCR Engines also contain de-skewing, despeckling, line removal, aspect ratio normalization, or other pre-processing functions to improve OCR results.
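As one illustration, here is a rough de-skew sketch based on projection profiles: rotate the page through a range of candidate angles and keep the angle whose horizontal projection is sharpest. This is a common heuristic, not any particular engine's documented algorithm:

    import numpy as np
    from scipy.ndimage import rotate

    def deskew_angle(bw, angles=np.arange(-5.0, 5.5, 0.5)):
        # bw: 2D array where text pixels are 1 and background is 0.
        best_angle, best_score = 0.0, -1.0
        for angle in angles:
            rotated = rotate(bw, angle, reshape=False, order=0)
            profile = rotated.sum(axis=1)    # ink per row
            score = np.var(profile)          # sharp peaks -> high variance
            if score > best_score:
                best_angle, best_score = angle, score
        return best_angle    # applying this rotation levels the text lines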

FYI Grooper has its own pre-processing capabilities through its Image Processing operations. OCR Engines typically place these pre-processing functions in a "black box" for users. At best, the OCR Engine may allow you to turn a property "on" or "off" but may not allow you to configure it further to fine tune its results. Custom Image Processing can be performed in Grooper using IP Profiles made of highly configurable IP Commands.

One of the most important aspects of pre-processing is "segmentation". This is the process of breaking up a page first into lines, then into words, and finally into individual characters.

In general, this process involves distinguishing between text and the white space around it. Lines of text are distinguished by the horizontal bands of white space between one line and the next. This can be seen using a histogram projection profile.

The gray peaks on the left side of the image show the amount of black pixels in each row of the page. The larger the peak, the larger the number of black pixels on that line. OCR "sees" a line break wherever there is a gap between those collections of pixels.
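A minimal NumPy sketch of that projection: sum the black pixels in each row, then treat runs of empty rows as the breaks between lines (assuming a clean binary image where text pixels are 1):

    import numpy as np

    def find_text_lines(bw):
        # bw: 2D array where text pixels are 1 and background is 0.
        profile = bw.sum(axis=1)              # black-pixel count per row
        lines, start = [], None
        for y, ink in enumerate(profile):
            if ink > 0 and start is None:
                start = y                     # a line of text begins
            elif ink == 0 and start is not None:
                lines.append((start, y))      # the gap ends the line
                start = None
        if start is not None:
            lines.append((start, len(profile)))
        return lines                          # (top, bottom) rows per line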

Words can be broken up in a similar way. One expects a small amount of space between characters. How we tell the difference between "rn" and "m", after all, is just that tiny amount of space between the "r" and "n". Between words, however, that space should be a bit larger. So, words are segmented at points where there are larger than normal amounts of white space between characters on a line.
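Within a single line, the same trick runs vertically: sum the black pixels per column and classify each white gap by its width. A hedged sketch, assuming a fixed gap threshold (a real engine would derive this from the line itself):

    import numpy as np

    def split_words(line_bw, word_gap=8):
        # line_bw: 2D array for one text line; text pixels are 1.
        profile = line_bw.sum(axis=0)         # black-pixel count per column
        words, start, gap = [], None, 0
        for x, ink in enumerate(profile):
            if ink > 0:
                if start is None:
                    start = x                 # a word begins
                gap = 0
            elif start is not None:
                gap += 1
                if gap >= word_gap:           # a wide gap ends the word
                    words.append((start, x - gap + 1))
                    start, gap = None, 0
        if start is not None:
            words.append((start, len(profile)))
        return words                          # (left, right) columns per word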

In a perfect world, characters would be segmented out at this point as well. After all, there is still some space between each character, just a little less than between each word. You can easily see this with fixed-pitch fonts, where every character occupies the same width.

However, the world of printed text is rarely that perfect.

Looking at the image below, there is no white space between the "a" and "z" or the "z" and "e" in "Hazel". Just looking at the histogram projection, there is no break in the pixels to define where one character stops and another begins. Meanwhile, there is a slight break in the middle of the "n" in "Daniels", white space inside the character where there shouldn't be. But that shouldn't mean the "n" is two separate characters.

If the characters were separated out using the normal segmenting we've seen previously, we might expect a very poor result. However, ultimately, we get the result we expect, "Hazel Daniels".

Modern OCR Engines perform more sophisticated character-level segmenting than just looking for small gaps between characters. Characters connected by small artifacts can be isolated from each other, and characters that are broken apart can be linked back together. This is done both by analyzing the peaks and valleys of pixel densities to determine where a gap "should" be, and by looking at the context of the character fragments before and after a candidate break to decide where one character starts and stops.
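One simple version of the density heuristic, sketched below: if a segment is suspiciously wide relative to the line's typical character width, split it at the column with the least ink, where a gap most likely "should" be. Real engines are far more contextual; the 1.5x width rule here is an invented assumption:

    import numpy as np

    def split_touching(segment_bw, typical_width):
        # segment_bw: 2D array for one candidate character; text pixels are 1.
        width = segment_bw.shape[1]
        if width < 1.5 * typical_width:
            return [segment_bw]               # looks like a single character
        profile = segment_bw.sum(axis=0)
        # Search the middle band for the "valley" with the least ink.
        lo, hi = width // 4, 3 * width // 4
        cut = lo + int(np.argmin(profile[lo:hi]))
        return [segment_bw[:, :cut], segment_bw[:, cut:]]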

Once the OCR Engine has segmented the entire image into individual character segments, it can use character recognition to determine what character corresponds to that segment. However, this is where a lot of OCR errors can crop up. Depending on the quality of the image or the original document, characters can be joined and disconnected in many different ways. There's no guarantee the OCR Engine will perfectly separate out one segment of a word as the "right" character.

Character Recognition

There are two basic types of recognition algorithms: matrix matching and feature extraction.

Matrix matching compares an NxN matrix of pixels on the page to a library of stored character glyph examples. This is also known as "pattern recognition" or "image correlation".

The character on the document's image is compared to a stored example by matching the matrix of pixels between the character on the image and the stored glyph.

The OCR Engine then makes a decision about what character that matrix of pixels is. In this case, a "G". Some kind of confidence or similarity score is also assigned. The example above is pretty similar to the stored glyph. It would score something like 99%. Matrix matching does have some pitfalls, however. Because it is comparing text to the stored glyph pixel by pixel, the text needs to be similar to the stored glyph's font and scale in order to match. While there may be hundreds of example glyphs of various fonts and scales for a single character, this can cause problems when matching text on poor quality images or using uncommon fonts.
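A toy sketch of matrix matching: overlay a binarized character on a stored glyph of the same size and score the fraction of pixel positions that agree. Real engines compare against libraries of many fonts and scales and use more robust correlation measures; the 5x5 "glyph" here is invented for illustration:

    import numpy as np

    def match_score(candidate, glyph):
        # candidate, glyph: same-size binary matrices (1 = black pixel).
        # Score = fraction of pixel positions where the two agree.
        return (candidate == glyph).mean()

    # A crude 5x5 stored glyph and a candidate with one pixel of noise.
    glyph = np.array([[0,1,1,1,0],
                      [1,0,0,0,0],
                      [1,0,1,1,0],
                      [1,0,0,1,0],
                      [0,1,1,1,0]])
    candidate = glyph.copy()
    candidate[2, 0] = 0                       # scanner noise
    print(f"{match_score(candidate, glyph):.0%}")   # -> 96%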

The second type of recognition algorithm, feature extraction, decomposes characters into their component "features" like lines, line direction, line intersections or closed loops. These features are compared to vector-like representations of a character, rather than pixel representations of the character.

Instead of pixels, the features describing how the character is drawn are compared to the features used to draw the stored glyphs.
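A hedged toy sketch of the feature approach: reduce each character to a small feature vector (here just quadrant ink densities and a count of closed loops; real engines use far richer features) and pick the stored glyph whose vector is closest:

    import numpy as np
    from scipy.ndimage import label

    def features(char_bw):
        # char_bw: 2D binary array for one character (1 = black pixel).
        h, w = char_bw.shape
        # Ink density per quadrant: a crude descriptor of the shape.
        zones = [char_bw[:h//2, :w//2].mean(), char_bw[:h//2, w//2:].mean(),
                 char_bw[h//2:, :w//2].mean(), char_bw[h//2:, w//2:].mean()]
        # Closed loops: count connected background regions; anything beyond
        # the outer background is a hole ("o" has one, "B" has two).
        _, regions = label(char_bw == 0)
        return np.array(zones + [regions - 1], dtype=float)

    def closest_glyph(char_bw, glyph_library):
        # glyph_library: dict mapping each character to its feature vector.
        vec = features(char_bw)
        return min(glyph_library,
                   key=lambda c: np.linalg.norm(vec - glyph_library[c]))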


Post-Processing