Shape Removal (IP Command)

Shape Removal detects and removes shapes from documents.
The Shape Removal command builds upon the existing Shape Detection command. Shape Detection finds shapes in a document (such as logos) from a set of sample images supplied by the user. Shape Removal goes one step further and removes the detected shape from the document. The removed pixels are either filled in with a solid color or inpainted to blend with the surrounding pixels.
Removing a shape from a document may be helpful if it is interfering with another Grooper activity, such as getting OCR text from the Recognize activity.
Version Differences
Shape Removal is a new feature in version 2.80. Prior to 2.80, shapes could be detected via the Shape Detection command. However, Shape Detection was generally used for Visual document classification. Shape Removal was not possible in previous versions.
Use Cases
The primary use for the Shape Removal command is to improve a document's readability. Images on a page often interfere with OCR results from the Recognize activity. If you give Shape Removal sample images of what to look for, it can remove those images from a document set. Logos, for example, are a common target. Besides removing logos to improve OCR results (which can be done without altering the final exported documents), Shape Removal can also be used to permanently de-brand the exported documents.
| Original Image | Logo detected and removed |
How To: Add Shape Removal to an IP Profile
Before you begin
You must have a Test Batch ready with examples of the shape on the document. Part of configuring the Shape Removal command is collecting sample images of the shape to be removed. This guide assumes you’ve already created an IP Profile.
Account for alternate image scale and angle

| ! | Any adjustments you make to these properties will increase this command's compute time. If you expect a drastic change in size or angle (i.e. half or double the sample image's size or rotated a full 90 degrees), it may be appropriate to just add a second sample image. This will significantly reduce the time it takes to detect the image. |
Preprocessing options

- The "Processing Resolution" property controls the resolution at which the sample image is compared to a document. The default resolution is fairly low (50 dpi), partly to cut processing time: lowering the sample image's resolution lowers the time it takes to scan for it on a document. In general, shape detection doesn't need a perfect match, just one close enough to count. A lower resolution can also help match shapes that are not a one-to-one reproduction of the sample image (for instance, if they are somewhat degraded on the document). In a way, it makes the sample a "fuzzy" match. However, if you lower the resolution too much, it could start matching shapes you don't want, while a higher resolution will make matching tighter.
- "Binarization" can turn color or grayscale samples into black and white. Searching for a flat black and white shape on a (also binarized) black and white document may end up giving you more accurate results. For more information on binarization, visit the Binarize article.
- "Dilation Factor" will bloat the edges of the sample image. This is another way of getting a "fuzzy" match from the sample. It will increase the range of pixels possible to produce a match along a shape's edge.
- If you know the physical location (give or take) the shape will be on a document, you can limit where Grooper looks for it using the "Region of Interest" property. Select it and press the ellipsis button at the end.
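The effect of lowering the processing resolution can be sketched with a toy example (plain Python, not Grooper's implementation): after downsampling by block averaging, small pixel-level differences between a sample and a degraded copy of it shrink, so the comparison becomes more forgiving.

```python
# Toy sketch: why a lower processing resolution makes shape matching "fuzzier".
# Downsampling averages away small pixel-level defects before comparison.

def downsample(img, factor):
    """Shrink a 2D intensity grid by averaging factor x factor blocks."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(0, h, factor):
        row = []
        for c in range(0, w, factor):
            block = [img[r + i][c + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def sad(a, b):
    """Sum of absolute differences between two same-sized grids."""
    return sum(abs(x - y)
               for ra, rb in zip(a, b) for x, y in zip(ra, rb))

# A 4x4 "sample" and a degraded copy with two flipped pixels.
sample   = [[0, 0, 255, 255]] * 4
degraded = [[0, 255, 255, 255],
            [0, 0, 255, 0],
            [0, 0, 255, 255],
            [0, 0, 255, 255]]

full_res = sad(sample, degraded)
low_res  = sad(downsample(sample, 2), downsample(degraded, 2))
print(full_res, low_res)  # → 510 127.5 (the mismatch shrinks at low resolution)
```

The mismatch does not vanish at the lower resolution, it just weighs less, which is exactly the "close enough to catch a match" behavior described above.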

A brief aside about masks


As you can see, we have a problem. While the Shape Mask looks like a slightly blurry silhouette of our sample image (we'll talk about why it's blurry later), the Dropout Mask does not contain the three scales underneath the "G". Only pixels from the dropout mask are removed, giving us a poor result.
| Shape Mask | Dropout Mask | Result |
Binarize the document

Binarization converts color images to black and white by "thresholding" the image. Thresholding is the process of setting a threshold value on the pixel intensity of the original image. Pixel intensity is a pixel's "lightness" or "brightness". Essentially, once a midpoint between the most intense ("whitest") and least intense ("blackest") pixel on a page is established, lighter pixels are converted to white and darker are converted to black. Or put another way, pixels with an intensity value above the threshold are converted to white, and those below the threshold are converted to black. This midpoint (or "threshold") can be set manually or found automatically by a software application. The Thresholding Method can be set in one of four ways:
- Simple - Thresholds an image to black and white using a fixed threshold value between 1 and 255.
- Auto - Selects a threshold value automatically using Otsu's Method.
- Adaptive - Thresholds pixels based on the intensity of pixels in the local neighborhood.
- Dynamic - Performs adaptive thresholding, while preserving dark areas on the page.
| Threshold 140 | Threshold 200 |
You may be concerned that text quality is affected by raising the threshold that high. Remember, we are only temporarily binarizing the image for the purpose of dropping out the shape. It will have no effect on how the text is read via OCR (unless text is removed as part of the shape).
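As a rough illustration of the comparison above (a plain-Python sketch, not Grooper's code), "Simple" thresholding just compares each pixel's intensity to a fixed value:

```python
# Minimal sketch of "Simple" thresholding: pixels above the threshold become
# white (255), everything else becomes black (0).

def simple_threshold(img, threshold):
    return [[255 if px > threshold else 0 for px in row] for row in img]

gray = [[30, 120, 150],
        [190, 210, 90]]

print(simple_threshold(gray, 140))  # [[0, 0, 255], [255, 255, 0]]
print(simple_threshold(gray, 200))  # [[0, 0, 0], [0, 255, 0]]
```

Note how raising the threshold from 140 to 200 turns more mid-intensity pixels black, which is how a higher threshold can pull a light-colored shape into the dropout mask.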
Remove the shape's pixels
Once the image is binarized and a shape is detected, a dropout mask is created, and pixel locations from the binarized image matching pixel locations from the shape mask are removed. They aren't physically erased, however. Rather, they are colored in using one of two "Dropout Methods": "Fill" or "Inpaint".
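The dropout step described above can be sketched in a few lines (a toy model, not Grooper's implementation): pixel locations set in the mask are recolored with a fill value rather than deleted.

```python
# Toy sketch of the dropout step: masked pixel locations are recolored
# (here with 255, white), never deleted.

def drop_out(image, mask, fill=255):
    """Return a copy of image with masked pixel locations filled."""
    return [[fill if m else px for px, m in zip(irow, mrow)]
            for irow, mrow in zip(image, mask)]

image = [[0,   0, 200],
         [0, 200, 200]]
mask  = [[1, 1, 0],
         [1, 0, 0]]   # 1 = pixel belongs to the detected shape

print(drop_out(image, mask))  # [[255, 255, 200], [255, 200, 200]]
```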
Dropout Method: Fill
Fill is the most common method. By default, this will replace pixels in the original image with a color matching the image's background. Alternatively, you can pick which color to fill the dropped out pixels. In the example we've been using, the background color was identified as the shade of gray seen in the Output Image below.
You can change what color fills the dropout mask using the "Fill Color" property. Expand the "Dropout Method" by double clicking it and select "Fill Color". If it is blank, it is using the background color.
You can select a new color by expanding the dropdown menu and using the "Custom" "Web" and "System" tabs.
Setting the color to "White", we get a result closer to what we want.
There is still a faint outline of the logo. This is because those pixels were turned white during binarization and therefore not included in the dropout mask. We will resolve this issue using the "Mask Dilation Factor".
This property expands the dropout mask to increase the region of pixels to fill.
| No Mask Dilation Factor | Mask Dilation Factor of "6". No more logo! |
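The "Mask Dilation Factor" behaves like standard morphological dilation. A minimal sketch on a 0/1 grid (plain Python, not Grooper's implementation): every pixel within `factor` pixels of a masked pixel joins the mask, growing its edges outward.

```python
# Rough sketch of mask dilation: grow the mask outward by `factor` pixels
# so it also covers faint outline pixels just beyond the original mask.

def dilate(mask, factor=1):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # A pixel joins the dilated mask if any masked pixel is nearby.
            if any(mask[i][j]
                   for i in range(max(0, r - factor), min(h, r + factor + 1))
                   for j in range(max(0, c - factor), min(w, c + factor + 1))):
                out[r][c] = 1
    return out

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]

print(dilate(mask, 1))
# [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]
```

A factor of 0 leaves the mask unchanged; each increment pushes the fill region one more pixel past the shape's edge, which is what erases the faint leftover outline.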
Dropout Method: Inpaint
Inpaint fills the dropout mask using color information from pixels around the removed pixels. This method is designed to match removed pixels to a colored or complex background. Student transcripts are a great example. They often are printed on paper with some kind of patterned background. For our example, the result looks odd because the document has just a white background, but it should demonstrate what is happening.
The Inpaint method also has two different methods of filling pixels: "Telea" and "NavierStokes". "Telea" restores pixels by approximating the value of the removed pixels based on the value of pixels around it. More or less, if 75% of the pixels around it are white and 25% of the pixels around it are black, the pixel would become white. The area of known pixels is called a "neighborhood". You probably think about housing demographics the same way. Let's say for every house on a block you know their household income but one. 75% of them fall into an "upper class" income bracket. 25% fall into "upper middle class". While that one house's income level could be upper middle class (or even lower), given most of the houses on the block are upper class, it's safer to assume it is upper class as well.
"NavierStokes" uses equations from fluid dynamics to fill in pixels the same way a fluid would fill a void. Imagine pixel colors bleeding into the empty space the same way a liquid would fill a gap. If you had a grey colored liquid and a black colored liquid filling in a gap, they would compete to fill the space in certain ways. If there's less of the black liquid than grey around the gap, ultimately more of the gap will be filled by grey liquid. Furthermore, the black liquid will pool in the gap closer to concentrations of black liquid around the gap. Filling in pixels works much the same way. First, if there's more grey pixels around the empty space, more of that void is going to be filled by grey pixels. Second, if a black pixel is right next to the empty space, at least part of that space should be filled by black pixels.
You can also control the "Inpaint Radius". This property specifies how large an area around the dropped-out pixels Grooper "looks at" when deciding how to fill them in. In other words, it sets how big the neighborhood is. You can really see the difference between "Telea" and "NavierStokes" when configuring this property. "Telea" colors empty pixels using a weighted sum of the pixels in the neighborhood around them. Increasing the Inpaint Radius increases the size of the neighborhood around the pixel to be filled. If we increase the Inpaint Radius to "25px", that much larger radius is going to include more white pixels, so we would expect to see at least a lighter image. However, since "NavierStokes" uses fluid dynamics, this "whiting out" is much less pronounced. With the radius being larger, there's more "fluid" to draw from. But at some point, the void of pixels is filled and the "flow" of pixels into the void stops.
| | 3px Inpaint Radius | 25px Inpaint Radius |
| Telea | | |
| NavierStokes | | |
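To illustrate the radius effect described above, here is a much-simplified stand-in for Telea-style inpainting (plain Python, not the actual Telea or Grooper algorithm): each dropped-out pixel becomes the average of the known pixels within the radius, so a larger radius that reaches more white pixels produces a lighter fill.

```python
# Simplified neighborhood-average inpainting. Each masked pixel is set to
# the average of the *known* (unmasked) pixels within `radius` of it.

def neighborhood_fill(img, mask, radius):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(h):
        for c in range(w):
            if mask[r][c]:
                known = [img[i][j]
                         for i in range(max(0, r - radius), min(h, r + radius + 1))
                         for j in range(max(0, c - radius), min(w, c + radius + 1))
                         if not mask[i][j]]
                out[r][c] = sum(known) // len(known)
    return out

# One row: dark pixels on the left, white on the right, one dropped-out pixel.
img  = [[0,  0, 99, 255, 255, 255]]
mask = [[0, 0, 1, 0, 0, 0]]

print(neighborhood_fill(img, mask, 1))  # [[0, 0, 127, 255, 255, 255]]
print(neighborhood_fill(img, mask, 3))  # [[0, 0, 153, 255, 255, 255]]
```

With radius 1 the fill averages one dark and one white neighbor (127); with radius 3 the neighborhood reaches more white pixels, so the fill is lighter (153), mirroring the "whiting out" described above.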
Keep in mind that for our example, "Fill" worked just fine. "Inpaint" is better suited to matching the removed area to a background more complicated than plain white.
Dilation Factor vs. Dilation Factor vs. Mask Dilation Factor
You may have noticed, we skipped over one Shape Removal property, "Dilation Factor". You may have also noticed this term has popped up a lot. There is a "Dilation Factor" property in "Detection Settings". There is a "Dilation Factor" property in the main "Shape Removal" property panel. There is a "Mask Dilation Factor" sub-property under the "Dropout Method" property.
The "Dilation Factor" property in the main Shape Removal property panel dilates the Shape Mask. It is set to "4" by default, which is why the shape looks bloated when you look at the "Shape Mask" diagnostic image. It is dilated by default to account for variations between the sample image and the image being removed. Unlike the other dilation factors, this one only accepts positive values; in other words, the Shape Mask can be dilated but not eroded.

Refer to the Property Details section below for the differences between the various dilation factors within "Shape Removal".
Property Details
There are four configurable properties available to Shape Removal: Detection Settings, Binarization, Dilation Factor, and Dropout Method. Some of these have substantial subproperties available to them. They are all detailed below.
Detection Settings Details

The properties located in "Detection Settings" are used to set sample images to detect on documents and configure how and where they are detected. Pressing the ellipsis button at the end of the property will bring up a new window with the properties listed below.
| Property | Default Value | Information |
| General Properties | ||
| Sample Images | 0 sample images | Here, you will capture sample images of the shape you want to detect. Press the ellipsis button at the end of the property to bring up a new window to add samples. You will select documents from a test batch and lasso the image to be detected. |
| Shape Name | | Use this property to type a name used to identify the shape. |
| Proximity Measure | SAD | This property sets how similarity is determined between sample images and other images. There are three methods available: SAD (sum of absolute differences), CrossCorr (normalized cross-correlation), and SSD (sum of squared distances). Each method uses a different equation to compare the pixels in the sample image to the pixels on the document. SAD is a very simple way to automate searching for sample images. It measures the absolute difference between each pixel in the sample image and the corresponding pixel in the block it's being compared to. SAD may be unreliable given changes in lighting, color, or image degradation, but it is generally the go-to method for shape detection. |
| Background Differencing | False | Setting this property to true can help when dealing with shapes with a lot of blank space in the sample image. Shapes containing mostly white space can be challenging. If 90% of the image's pixels are white, the Shape Detection operation will match other regions on a document that also contain 90% white pixels. This can produce a lot of false-positive matches with high confidence that erroneous regions match the sample. When background differencing is enabled, confidence values are scaled according to the color balance of the sample image. If the sample contains 90% white pixels, matched regions on the document falling below 90% confidence are effectively removed as matches. |
| Minimum Confidence | 80% | This is the minimum confidence for a successful match (from 0% to 100%). |
| Orientation and Scale Properties | ||
| Maximum Angle | 0 degrees | This can account for instances when the image on a document is slightly rotated from the sample image's orientation (between 0 and 360 degrees). Altering this property will also allow you to adjust the "Angle Step" during detection. For example, if you set the Maximum Angle to 25 degrees and an Angle Step of 5 degrees, Shape Detection would look for a match that is rotated -25, -20, -15, -10, -5, 0, 5, 10, 15, 20, and 25 degrees from the original image instead of every single degree from -25 to 25. The Maximum Angle must be an even multiple of the Angle Step (as in 25 is an even multiple of 5). |
| Minimum Scale | 100% | This can account for instances when the image on a document is scaled slightly smaller than the sample image (between 10% and 100%). Altering this property will also allow you to adjust the "Scale Step" during detection. For example, if you set the Minimum Scale to 50% and Scale Step to 10%, Shape Detection would look for a match that is 100%, 90%, 80%, 70%, 60%, and 50% the size of the sample image instead of 100%, 99%, 98%, and so on. |
| Maximum Scale | 100% | This can account for instances when the image on a document is scaled slightly larger than the sample image (between 100% and 400%). Altering this property will also allow you to adjust the "Scale Step" during detection. For example, if you set the Maximum Scale to 150% and Scale Step to 10%, Shape Detection would look for a match that is 100%, 110%, 120%, 130%, 140%, and 150% the size of the sample image instead of 100%, 101%, 102%, and so on. |
| Preprocessing Properties | ||
| Processing Resolution | Dpi50 | This sets the resolution at which the image is processed during Shape Detection. This does not change the output resolution of the document itself. It only affects the resolution when Grooper is looking for a match to the sample image. A higher dpi will force a more specific 1:1 match to the sample image. A lower resolution will allow for a "looser" or "fuzzier" match, accounting for differences in the quality of the sample compared to the document set. |
| Binarization | Disabled | Binarization converts color images to black and white by "thresholding" the image. Searching for a flat black and white shape on a (also binarized) black and white document may end up producing more accurate results. This does not binarize the document itself, it only does so temporarily for Shape Detection. After detection is performed, the image reverts to its original form.
Thresholding is the process of setting a threshold value on the pixel intensity of the original image. Pixel intensity is a pixel's "lightness" or "brightness". Essentially, once a midpoint between the most intense ("whitest") and least intense ("blackest") pixel on a page is established, lighter pixels are converted to white and darker are converted to black. Or put another way, pixels with an intensity value above the threshold are converted to white, and those below the threshold are converted to black. This midpoint (or "threshold") can be set manually or found automatically by a software application. The Thresholding Method can be set to one of four methods: Simple, Auto, Adaptive, or Dynamic.
Each method has its own set of configurable properties. For more information on binarization and these methods, visit the Binarize article. |
| Dilation Factor | 0 | "Dilation Factor" will bloat the edges of the sample image. This is another way of getting a "fuzzy" match from the sample. It will increase the range of pixels possible to produce a match along a shape's edge. |
| Region of Interest (inches) | (0,0) : (0,0) | If you know the physical location (give or take) the shape will be on a document, you can limit where Grooper looks for it using the "Region of Interest" property. Pressing the ellipsis button at the end of the property will bring up a new window that allows you to lasso the area you expect to find the shape with your mouse. |
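The SAD measure from the table above can be sketched as a sliding-window search (a toy example in plain Python, not Grooper's implementation): the sample is compared against every same-sized block of the document, and the block with the lowest sum of absolute differences wins.

```python
# Toy sketch of SAD-based shape detection: slide the sample over the
# document and score each position by the sum of absolute differences.

def sad(a, b):
    """Sum of absolute differences between two same-sized grids."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def best_match(document, sample):
    """Return the (row, col) of the block with the lowest SAD score."""
    sh, sw = len(sample), len(sample[0])
    scores = {}
    for r in range(len(document) - sh + 1):
        for c in range(len(document[0]) - sw + 1):
            block = [row[c:c + sw] for row in document[r:r + sh]]
            scores[(r, c)] = sad(block, sample)
    return min(scores, key=scores.get)

sample   = [[0, 255],
            [0, 255]]
document = [[255, 255, 255, 255],
            [255,   0, 255, 255],
            [255,   0, 255, 255]]

print(best_match(document, sample))  # (1, 1)
```

A real implementation would also enforce the Minimum Confidence cutoff and iterate over the angle and scale steps described in the table, but the core comparison is this simple.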
Binarization Details

Binarization converts color images to black and white by "thresholding" the image. Once a sample shape is found on a document, the document is binarized in order to target the pixels to be removed.
Thresholding is the process of setting a threshold value on the pixel intensity of the original image. Pixel intensity is a pixel's "lightness" or "brightness". Essentially, once a midpoint between the most intense ("whitest") and least intense ("blackest") pixel on a page is established, lighter pixels are converted to white and darker are converted to black. Or put another way, pixels with an intensity value above the threshold are converted to white, and those below the threshold are converted to black. This midpoint (or "threshold") can be set manually or found automatically by a software application. The Thresholding Method can be set in one of four ways:
- Simple - Thresholds an image to black and white using a fixed threshold value between 1 and 255.
- Auto - Selects a threshold value automatically using Otsu's Method.
- Adaptive - Thresholds pixels based on the intensity of pixels in the local neighborhood.
- Dynamic - Performs adaptive thresholding, while preserving dark areas on the page.
Each method has its own set of configurable properties. For more information on binarization and these methods, visit the Binarize article.
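Otsu's Method (the "Auto" option above) can be sketched as an exhaustive search for the threshold that minimizes the combined within-class variance of the dark and light pixel groups. A compact plain-Python illustration, not Grooper's implementation:

```python
# Sketch of Otsu's Method: try every threshold and keep the one that
# minimizes the combined (count-weighted) within-class variance of the
# resulting black and white pixel groups.

def otsu_threshold(pixels):
    best_t, best_score = 0, float("inf")
    for t in range(1, 256):
        fg = [p for p in pixels if p >= t]   # pixels that become white
        bg = [p for p in pixels if p < t]    # pixels that become black
        if not fg or not bg:
            continue
        def var(vals):
            m = sum(vals) / len(vals)
            return sum((v - m) ** 2 for v in vals) / len(vals)
        score = len(bg) * var(bg) + len(fg) * var(fg)
        if score < best_score:
            best_t, best_score = t, score
    return best_t

# A bimodal page: dark text pixels around 40, light background around 220.
pixels = [38, 40, 42, 44, 218, 220, 222, 224]
print(otsu_threshold(pixels))  # 45 (first threshold that cleanly splits the groups)
```

On a clearly bimodal intensity histogram like this one, the best score is shared by every threshold between the two clusters; this sketch returns the first such value.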
Dilation Factor Details

Dilation Factor here (in the main Shape Removal property panel) controls how dilated the Shape Mask is. The Shape Mask is overlaid on a binarized document after one of the sample shapes is detected. All pixels falling under the Shape Mask will be dropped out. Dilating the mask widens the sample image, adding a pixel border around it, effectively expanding its edges. Since all pixels underneath the Shape Mask will be removed, dilating it can account for small variations between the sample image and the image being removed. The objective is to bloat the Shape Mask enough to intersect these small variations, but not so much that it intersects other meaningful features on the page, such as text. Only positive numbers are allowed here, meaning the Shape Mask can only be dilated, not eroded.
Dropout Method Details

This property determines how pixels targeted for removal during the dropout operation are "removed". They are not deleted; rather, they are colored in to match the image's background. This can be set to "Fill" or "Inpaint".
The "Fill" method replaces dropped-out pixels with a given color. The "Fill Color" property determines what color is used to fill the pixels. It defaults to a color determined to match the image's background. Alternatively, you can pick which color to fill the pixels. The "Mask Dilation Factor" will dilate the filled shape. Colored pixels will be added to the shape's borders, increasing the size of the removed area.

"Inpaint" fills the dropout mask using color information from pixels around the removed pixels. This method is designed to match removed pixels to a colored or complex background. Student transcripts are a great example; they are often printed on paper with some kind of patterned background. There are two "Inpaint Method" options: Telea and NavierStokes. The "Inpaint Radius" property specifies how large an area around the dropped-out pixels Grooper "looks at" when deciding how to fill them in, increasing the size of the analyzed "neighborhood" of pixels. The "Mask Dilation Factor" will dilate the filled shape. Colored pixels will be added to the shape's borders, increasing the size of the removed area.







































