Shape Removal (IP Command)
Revision as of 14:25, 21 December 2023

Shape Removal detects and removes shapes from documents.
The Shape Removal command builds upon the existing Shape Detection command. Shape Detection finds shapes in a document (such as logos) from a set of sample images given by the user. Shape Removal goes one step further and removes the detected shape from the document. The removed pixels are either filled in with a solid color or inpainted to try and match it to pixels nearby.
Removing a shape from a document may be helpful if it is interfering with another Grooper activity, such as getting OCR text from the Recognize activity.
Use Cases
The primary use for the Shape Removal command is to improve a document's readability. Images on a page often interfere with OCR results from the Recognize activity. Given sample images of what to look for, Shape Removal can remove those images from a document set. Logos, for example, are frequently removed by Shape Removal. Besides removing logos to improve OCR results (which can be done without removing them from the final exported documents), Shape Removal can also be used to permanently de-brand the exported documents.
[Images: Original Image | Logo detected and removed]
How To: Add Shape Removal to an IP Profile
Before you begin
You must have a Test Batch ready with examples of the shape on the document. Part of configuring the Shape Removal command is collecting sample images of the shape to be removed. This guide assumes you’ve already created an IP Profile.
Account for alternate image scale and angle

Note: Any adjustments you make to these properties will increase this command's compute time. If you expect a drastic change in size or angle (e.g. half or double the sample image's size, or rotation by a full 90 degrees), it may be better to just add a second sample image. This will significantly reduce the time it takes to detect the shape.
Preprocessing options

- The "Processing Resolution" property controls the resolution at which the sample image is compared to a document. The default resolution is fairly low (50 dpi), partly to cut processing time: lowering the sample image's resolution lowers the time it takes to scan for it on a document. In general, shape detection doesn't need a perfect match; it just needs to be close enough to catch one. A lower resolution can also help match shapes that are not a one-to-one reproduction of the sample image (for example, if they are somewhat degraded on the document). In a way, it makes the sample a "fuzzy" match. However, lowering the resolution too far may start matching shapes you don't want, while a higher resolution makes matching tighter.
- "Binarization" can turn color or grayscale samples into black and white. Searching for a flat black-and-white shape on an (also binarized) black-and-white document may give you more accurate results. For more information on binarization, visit the Binarize article.
- "Dilation Factor" will bloat the edges of the sample image. This is another way of getting a "fuzzy" match from the sample. It will increase the range of pixels possible to produce a match along a shape's edge.
- If you know the physical location (give or take) the shape will be on a document, you can limit where Grooper looks for it using the "Region of Interest" property. Select it and press the ellipsis button at the end.

A brief aside about masks


As you can see, we have a problem. While the Shape Mask looks like a slightly blurry silhouette of our sample image (we'll talk about why it's blurry later), the Dropout Mask does not contain the three scales underneath the "G". Only pixels from the Dropout Mask are removed, giving us a poor result.
[Images: Shape Mask | Dropout Mask | Result]
Binarize the document

Binarization converts color images to black and white by "thresholding" the image. Thresholding is the process of setting a threshold value on the pixel intensity of the original image. Pixel intensity is a pixel's "lightness" or "brightness". Essentially, once a midpoint between the most intense ("whitest") and least intense ("blackest") pixel on a page is established, lighter pixels are converted to white and darker pixels are converted to black. Or put another way, pixels with an intensity value above the threshold are converted to white, and those below the threshold are converted to black. This midpoint (or "threshold") can be set manually or found automatically by a software application. The Thresholding Method can be set to one of four methods:
- Simple - Thresholds an image to black and white using a fixed threshold value between 1 and 255.
- Auto - Selects a threshold value automatically using Otsu's Method.
- Adaptive - Thresholds pixels based on the intensity of pixels in the local neighborhood.
- Dynamic - Performs adaptive thresholding, while preserving dark areas on the page.
[Images: result with Threshold 140 | result with Threshold 200]
You may be concerned the text quality is affected by increasing the threshold that high. Remember, we are only temporarily binarizing the image for the purpose of dropping out the shape. It will have no effect on how the text is read via OCR (unless text is removed as part of the shape).
Remove the shape's pixels
Once the image is binarized and a shape is detected, a dropout mask is created, and pixels in the binarized image at locations matching the shape mask are removed. They aren't physically erased, however. Rather, they are colored in using one of two "Dropout Methods": "Fill" or "Inpaint".
Dropout Method: Fill
Fill is the most common method. By default, it replaces pixels in the original image with a color matching the image's background. Alternatively, you can pick which color fills the dropped-out pixels. In the example we've been using, the background color was identified as a shade of gray, seen in the Output Image below.
You can change what color fills the dropout mask using the "Fill Color" property. Expand the "Dropout Method" property by double-clicking it and select "Fill Color". If it is blank, the background color is used.
You can select a new color by expanding the dropdown menu and using the "Custom", "Web", and "System" tabs.
Setting the color to "White", we get a result closer to what we want.
There is still a faint outline of the logo. This is because those pixels were turned white during binarization and therefore not included in the dropout mask. We will resolve this issue using the "Mask Dilation Factor".
This property expands the dropout mask to increase the region of pixels to fill.
[Images: No Mask Dilation Factor | Mask Dilation Factor of "6" (no more logo)]
Dropout Method: Inpaint
Inpaint fills the dropout mask using color information from pixels around the removed pixels. This method is designed to match removed pixels to a colored or complex background. Student transcripts are a great example. They often are printed on paper with some kind of patterned background. For our example, the result looks odd because the document has just a white background, but it should demonstrate what is happening.
The Inpaint method itself has two different ways of filling pixels: "Telea" and "NavierStokes". "Telea" restores pixels by approximating the value of the removed pixels from the values of the pixels around them. More or less, if 75% of the pixels around a removed pixel are white and 25% are black, the pixel becomes white. The area of known pixels is called a "neighborhood". Think of it like housing demographics: say you know the household income of every house on a block but one, and 75% of them fall into an "upper class" income bracket while 25% fall into "upper middle class". That one house could be upper middle class (or even lower), but given most of the houses on the block are upper class, it's safer to assume it is upper class as well.
"NavierStokes" uses equations from fluid dynamics to fill in pixels the same way a fluid would fill a void. Imagine pixel colors bleeding into the empty space the same way a liquid would fill a gap. If you had a grey colored liquid and a black colored liquid filling in a gap, they would compete to fill the space in certain ways. If there's less of the black liquid than grey around the gap, ultimately more of the gap will be filled by grey liquid. Furthermore, the black liquid will pool in the gap closer to concentrations of black liquid around the gap. Filling in pixels works much the same way. First, if there's more grey pixels around the empty space, more of that void is going to be filled by grey pixels. Second, if a black pixel is right next to the empty space, at least part of that space should be filled by black pixels.
You can also control the "Inpaint Radius". This property specifies how large an area around the dropped-out pixels Grooper "looks at" to decide how to fill them in. In other words, how big the neighborhood is. You can really see the difference between "Telea" and "NavierStokes" when configuring this property. "Telea" colors empty pixels from a weighted sum of the pixels in the neighborhood around them, so increasing the Inpaint Radius increases the size of that neighborhood. If we increase the Inpaint Radius to "25px", that much larger radius will include more white pixels, so we would expect to see at least a lighter image. However, since "NavierStokes" uses fluid dynamics, this "whiting out" is much less pronounced. With a larger radius there's more "fluid" to draw from, but at some point the void of pixels is filled and the "flow" of pixels into it should stop.
| | 3px Inpaint Radius | 25px Inpaint Radius |
| Telea | [image] | [image] |
| NavierStokes | [image] | [image] |
Keep in mind for our example, "Fill" worked just fine. "Inpaint" is more suited to match the removed area to a more complicated background than just white.
Dilation Factor vs. Dilation Factor vs. Mask Dilation Factor
You may have noticed we skipped over one Shape Removal property, "Dilation Factor". You may have also noticed this term has popped up a lot. There is a "Dilation Factor" property in "Detection Settings". There is a "Dilation Factor" property in the main "Shape Removal" property panel. There is a "Mask Dilation Factor" sub-property under the "Dropout Method" property.
The "Dilation Factor" property in the main Shape Removal property panel dilates the Shape Mask. It is set to "4" by default, which is why the shape looks bloated when you look at the "Shape Mask" diagnostic image. It is dilated by default to account for variations between the sample image and the image being removed. Unlike the other dilation factors, this one can only positively dilate the image; in other words, the shape mask cannot be eroded.
Refer to the sections below for the differences between the various dilation factors within "Shape Removal".
Property Details
There are four configurable properties available to Shape Removal: Detection Settings, Binarization, Dilation Factor, and Dropout Method. Some of these have substantial subproperties available to them. They are all detailed below.
Detection Settings Details
The properties located in "Detection Settings" are used to set sample images to detect on documents and configure how and where they are detected. Pressing the ellipsis button at the end of the property will bring up a new window with the properties listed below.
Binarization Details
Binarization converts color images to black and white by "thresholding" the image. Once a sample shape is found on a document, the document is binarized in order to target the pixels to be removed. Thresholding is the process of setting a threshold value on the pixel intensity of the original image. Pixel intensity is a pixel's "lightness" or "brightness". Essentially, once a midpoint between the most intense ("whitest") and least intense ("blackest") pixel on a page is established, lighter pixels are converted to white and darker pixels are converted to black. Or put another way, pixels with an intensity value above the threshold are converted to white, and those below the threshold are converted to black. This midpoint (or "threshold") can be set manually or found automatically by a software application. The Thresholding Method can be set to one of four methods:
- Simple - Thresholds an image to black and white using a fixed threshold value between 1 and 255.
- Auto - Selects a threshold value automatically using Otsu's Method.
- Adaptive - Thresholds pixels based on the intensity of pixels in the local neighborhood.
- Dynamic - Performs adaptive thresholding, while preserving dark areas on the page.

Dilation Factor Details
Dilation Factor here (in the main Shape Removal property panel) controls how dilated the Shape Mask is. The Shape Mask is overlaid on a binarized document after one of the sample shapes was detected. All pixels falling under the Shape Mask will be dropped out. Dilating the mask widens the sample image, adding a pixel border around it, effectively expanding its edges. Since all pixels underneath the Shape Mask will be removed, dilating it can account for small variations between the sample image and the image being removed. The objective is to bloat the Shape Mask enough to intersect these small variations, but not too much to intersect other meaningful features on the page, such as text. Only positive numbers are allowed here. Meaning the Shape Mask can only be dilated, not eroded.
Dropout Method Details
This property determines how pixels targeted during the dropout operation are "removed". They are not deleted; rather, they are colored in to match the image's background. This can be set to "Fill" or "Inpaint".
"Inpaint" fills the dropout mask using color information from pixels around the removed pixels. This method is designed to match removed pixels to a colored or complex background. Student transcripts are a great example. They often are printed on paper with some kind of patterned background. There are two "Inpaint Method" options: Telea and NavierStokes.
- "Telea" restores pixels by approximating the value of the removed pixels based on the average value of pixels around it. If 75% of the pixels around it are white and 25% of the pixels around it are black, the pixel would become white. The area of known pixels used to find this color average is called a "neighborhood".
- "NavierStokes" uses equations from fluid dynamics to fill in pixels the same way a fluid would fill a void.