Accord.Imaging
Public-domain test images for image processing applications.
This dataset contains famous images used in the image processing literature, such as
the Lena Söderberg picture.
Using this class, you can retrieve any of the following famous test images:
airplane.png
arctichare.png
baboon.png
barbara.bmp
barbara.png
boat.png
boy.bmp
boy.ppm
cameraman.tif
cat.png
fprint3.pgm
fruits.png
frymire.png
girl.png
goldhill.bmp
goldhill.png
lena.bmp
lenacolor.png
lena.ppm
Lenaclor.ppm
monarch.png
mountain.png
mountain.bmp
p64int.txt
peppers.png
pool.png
sails.bmp
sails.png
serrano.png
tulips.png
us021.pgm
us092.pgm
watch.png
zelda.png
References:
- ECE533 Digital Image Processing, "Public-Domain Test Images for Homeworks and Projects", University of Wisconsin-Madison, Fall 2012.
Gets all the image names that can be passed to
the method.
The image names in this dataset.
Gets or sets whether images with non-standard color palettes (i.e. 8-bpp images where
values do not represent intensity values but rather indices in a color palette) should
be converted to true 8-bpp grayscale. Default is true.
Downloads and prepares the test images dataset.
The path where datasets will be stored. If null or empty, the dataset
will be saved in a subfolder called "data" in the current working directory.
Gets the example with the specified name.
The standard image name. For a list of all possible names, see .
Gets the example image.
The standard image name. For a list of all possible names, see .
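For illustration, a minimal usage sketch based on the members described above (the local path, the GetImage method name, and the image name are assumptions for the example):
// download the dataset and retrieve one of the test images
TestImages testImages = new TestImages( path: @"C:\datasets" );
Bitmap lena = testImages.GetImage( "lena.bmp" );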
Base class for corner detectors implementing the interface.
Corner detectors can be seen as the simplest sparse feature extractors, where the extracted
features are the (x,y) positions themselves.
Process image looking for corners.
Source image to process.
Returns list of found corners (X-Y coordinates).
The source image has incorrect pixel format.
This method should be implemented by inheriting classes to perform the
actual corner detection, transforming the input image into a list of points.
This method should be implemented by inheriting classes to perform the
actual feature extraction, transforming the input image into a list of features.
Process image looking for corners.
Source image data to process.
Returns list of found corners (X-Y coordinates).
The source image has incorrect pixel format.
Process image looking for corners.
Source image data to process.
Returns list of found corners (X-Y coordinates).
The source image has incorrect pixel format.
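For illustration, a minimal sketch of how a concrete detector derived from this base class might be used (HarrisCornersDetector is assumed here as an example; any detector exposing ProcessImage works the same way):
// process the image and enumerate the detected corners
HarrisCornersDetector detector = new HarrisCornersDetector( );
List<IntPoint> corners = detector.ProcessImage( image );
foreach ( IntPoint corner in corners )
    Console.WriteLine( "corner at ({0}, {1})", corner.X, corner.Y );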
Divide filter - divide pixel values of two images.
The divide filter takes two images (source and overlay images)
of the same size and pixel format and produces an image where each pixel equals
the result of dividing the corresponding pixels from the provided images:
- For 8 bpp: (srcPix * 255f + 1f) / (ovrPix + 1f);
- For 16 bpp: (srcPix * 65535f + 1f) / (ovrPix + 1f).
The filter accepts 8 and 16 bpp grayscale images and 24, 32, 48 and 64 bpp
color images for processing.
Sample usage:
// create filter
Divide filter = new Divide(overlayImage);
// apply the filter
Bitmap resultImage = filter.Apply(sourceImage);
Format translations dictionary.
Initializes a new instance of the class.
Initializes a new instance of the class.
Overlay image
Initializes a new instance of the class.
Unmanaged overlay image.
Process the filter on the specified image.
Source image data.
Overlay image data.
Multiply filter - multiply pixel values of two images.
The multiply filter takes two images (source and overlay images)
of the same size and pixel format and produces an image where each pixel equals
the product of the corresponding pixels from the provided images:
- For 8 bpp: (srcPix * ovrPix) / 255;
- For 16 bpp: (srcPix * ovrPix) / 65535.
The filter accepts 8 and 16 bpp grayscale images and 24, 32, 48 and 64 bpp
color images for processing.
Sample usage:
// create filter
Multiply filter = new Multiply(overlayImage);
// apply the filter
Bitmap resultImage = filter.Apply(sourceImage);
Format translations dictionary.
Initializes a new instance of the class.
Initializes a new instance of the class.
Overlay image
Initializes a new instance of the class.
Unmanaged overlay image.
Process the filter on the specified image.
Source image data.
Overlay image data.
Fast Box Blur filter.
Reference: http://www.vcskicks.com/box-blur.php
Format translations dictionary.
Horizontal kernel size between 3 and 99.
Default value is 3.
Vertical kernel size between 3 and 99.
Default value is 3.
Initializes a new instance of the class.
Initializes a new instance of the class.
Horizontal kernel size.
Vertical kernel size.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Zhang-Suen skeletonization filter.
Zhang-Suen Thinning Algorithm. The filter uses
and colors to distinguish
between object and background.
The filter accepts 8 bpp grayscale images for processing.
Sample usage:
// create filter
ZhangSuenSkeletonization filter = new ZhangSuenSkeletonization( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
See
documentation for additional information.
Background pixel color.
The property sets the background (non-object) color to look for.
Default value is set to 0 - black.
Foreground pixel color.
The property sets the objects' (non-background) color to look for.
Default value is set to 255 - white.
Initializes a new instance of the class.
Initializes a new instance of the class.
Background pixel color.
Foreground pixel color.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Add filter - add pixel values of two images.
The add filter takes two images (source and overlay images)
of the same size and pixel format and produces an image where each pixel equals
the sum of the corresponding pixels from the provided images (if the sum is greater
than the maximum allowed value, 255 or 65535, it is truncated to that maximum).
The filter accepts 8 and 16 bpp grayscale images and 24, 32, 48 and 64 bpp
color images for processing.
Sample usage:
// create filter
Add filter = new Add( overlayImage );
// apply the filter
Bitmap resultImage = filter.Apply( sourceImage );
Source image:
Overlay image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Initializes a new instance of the class.
Overlay image.
Initializes a new instance of the class.
Unmanaged overlay image.
Process the filter on the specified image.
Source image data.
Overlay image data.
Difference filter - get the difference between overlay and source images.
The difference filter takes two images (source and overlay images)
of the same size and pixel format and produces an image where each pixel equals
the absolute difference between the corresponding pixels from the provided images.
The filter accepts 8 and 16 bpp grayscale images and 24, 32, 48 and 64 bpp
color images for processing.
If images with an alpha channel are used (32 or 64 bpp), visualization
of the result image may seem unexpected - most probably nothing will be seen
(if the image is displayed according to its alpha channel). This is
caused by the fact that after differencing the entire alpha channel is zeroed
(zero difference between the alpha channels), which means that the resulting image is
100% transparent.
Sample usage:
// create filter
Difference filter = new Difference( overlayImage );
// apply the filter
Bitmap resultImage = filter.Apply( sourceImage );
Source image:
Overlay image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Initializes a new instance of the class.
Overlay image.
Initializes a new instance of the class.
Unmanaged overlay image.
Process the filter on the specified image.
Source image data.
Overlay image data.
Intersect filter - get MIN of pixels in two images.
The intersect filter takes two images (source and overlay images)
of the same size and pixel format and produces an image where each pixel equals
the minimum of the corresponding pixels from the provided images.
The filter accepts 8 and 16 bpp grayscale images and 24, 32, 48 and 64 bpp
color images for processing.
Sample usage:
// create filter
Intersect filter = new Intersect( overlayImage );
// apply the filter
Bitmap resultImage = filter.Apply( sourceImage );
Source image:
Overlay image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Initializes a new instance of the class.
Overlay image.
Initializes a new instance of the class.
Unmanaged overlay image.
Process the filter on the specified image.
Source image data.
Overlay image data.
Merge filter - get MAX of pixels in two images.
The merge filter takes two images (source and overlay images)
of the same size and pixel format and produces an image where each pixel equals
the maximum of the corresponding pixels from the provided images.
The filter accepts 8 and 16 bpp grayscale images and 24, 32, 48 and 64 bpp
color images for processing.
Sample usage:
// create filter
Merge filter = new Merge( overlayImage );
// apply the filter
Bitmap resultImage = filter.Apply( sourceImage );
Source image:
Overlay image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Initializes a new instance of the class.
Overlay image.
Initializes a new instance of the class.
Unmanaged overlay image.
Process the filter on the specified image.
Source image data.
Overlay image data.
Morph filter.
The filter combines two images by taking
a specified percentage of pixel intensities from the source
image and the rest from the overlay image. For example, if the
source percent value is set to 0.8, then each pixel
of the result image equals 0.8 * source + 0.2 * overlay, where source
and overlay are the corresponding pixel values in the source and overlay images.
The filter accepts 8 bpp grayscale and 24 bpp color images for processing.
Sample usage:
// create filter
Morph filter = new Morph( overlayImage );
filter.SourcePercent = 0.75;
// apply the filter
Bitmap resultImage = filter.Apply( sourceImage );
Source image:
Overlay image:
Result image:
Format translations dictionary.
Percent of source image to keep, [0, 1].
The property specifies the percentage of the source image's pixel values to keep. The
rest is taken from the overlay image.
Initializes a new instance of the class.
Initializes a new instance of the class.
Overlay image.
Initializes a new instance of the class.
Unmanaged overlay image.
Process the filter on the specified image.
Source image data.
Overlay image data.
Move towards filter.
The result of this filter is an image which is based on the source image,
but updated in such a way as to decrease its difference with the overlay image - the source
image is moved towards the overlay image. The update equation is:
res = src + Min( Abs( ovr - src ), step ) * Sign( ovr - src ).
The bigger the step size value, the more the resulting
image looks like the overlay image. For example, if the step size equals
255 (or 65535 for images with 16 bits per channel), the resulting image will be
equal to the overlay image regardless of the source image's pixel values. If the step
size is set to 1, the resulting image will differ very little from the source image;
but if the filter is applied repeatedly to the resulting image again and
again, it will become equal to the overlay image in at most 255 (65535 for images with 16
bits per channel) iterations.
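For illustration, a per-channel sketch of the update equation above (illustrative variable names):
// src - source pixel value, ovr - overlay pixel value, stepSize - step size
int delta = ovr - src;
int res = src + Math.Min( Math.Abs( delta ), stepSize ) * Math.Sign( delta );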
The filter accepts 8 and 16 bpp grayscale images and 24, 32, 48 and 64 bpp
color images for processing.
Sample usage:
// create filter
MoveTowards filter = new MoveTowards( overlayImage, 20 );
// apply the filter
Bitmap resultImage = filter.Apply( sourceImage );
Source image:
Overlay image:
Result image:
Format translations dictionary.
Step size, [0, 65535].
The property defines the maximum amount of changes per pixel in the source image.
Default value is set to 1.
Initializes a new instance of the class.
Initializes a new instance of the class.
Overlay image.
Initializes a new instance of the class.
Overlay image.
Step size.
Initializes a new instance of the class.
Unmanaged overlay image.
Initializes a new instance of the class.
Unmanaged overlay image.
Step size.
Process the filter on the specified image.
Source image data.
Overlay image data.
Stereo anaglyph filter.
The image processing filter produces stereo anaglyph images which are
aimed to be viewed through anaglyph glasses with a red filter over the left eye and
a cyan filter over the right.
The stereo image is produced by combining two images of the same scene taken
from slightly different viewpoints. The right image must be provided to the filter using the
property, and the left image must be provided to the
method, which creates the anaglyph image.
The filter accepts 24 bpp color images for processing.
See enumeration for the list of supported anaglyph algorithms.
Sample usage:
// create filter
StereoAnaglyph filter = new StereoAnaglyph( );
// set right image as overlay
filter.Overlay = rightImage;
// apply the filter (providing left image)
Bitmap resultImage = filter.Apply( leftImage );
Source image (left):
Overlay image (right):
Result image:
Enumeration of algorithms for creating anaglyph images.
See anaglyph methods comparison for a
description of the different algorithms.
Creates anaglyph image using the following calculations:
- Ra=0.299*Rl+0.587*Gl+0.114*Bl;
- Ga=0;
- Ba=0.299*Rr+0.587*Gr+0.114*Br.
Creates anaglyph image using the following calculations:
- Ra=0.299*Rl+0.587*Gl+0.114*Bl;
- Ga=0.299*Rr+0.587*Gr+0.114*Br;
- Ba=0.299*Rr+0.587*Gr+0.114*Br.
Creates anaglyph image using the following calculations:
- Ra=Rl;
- Ga=Gr;
- Ba=Br.
Creates anaglyph image using the following calculations:
- Ra=0.299*Rl+0.587*Gl+0.114*Bl;
- Ga=Gr;
- Ba=Br.
Creates anaglyph image using the following calculations:
- Ra=0.7*Gl+0.3*Bl;
- Ga=Gr;
- Ba=Br.
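For illustration, a sketch of the per-pixel math of the first variant listed above (illustrative variable names; not the filter's actual code):
// rl, gl, bl - left image pixel components; rr, gr, br - right image pixel components
byte ra = (byte) ( 0.299 * rl + 0.587 * gl + 0.114 * bl );
byte ga = 0;
byte ba = (byte) ( 0.299 * rr + 0.587 * gr + 0.114 * br );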
Algorithm to use for creating anaglyph images.
Default value is set to .
Format translations dictionary.
Initializes a new instance of the class.
Initializes a new instance of the class.
Algorithm to use for creating anaglyph images.
Process the filter on the specified image.
Source image data (left image).
Overlay image data (right image).
Subtract filter - subtract pixel values of two images.
The subtract filter takes two images (source and overlay images)
of the same size and pixel format and produces an image, where each pixel equals
to the difference value of corresponding pixels from provided images (if difference is less
than minimum allowed value, 0, then it is truncated to that minimum value).
The filter accepts 8 and 16 bpp grayscale images and 24, 32, 48 and 64 bpp
color images for processing.
Sample usage:
// create filter
Subtract filter = new Subtract( overlayImage );
// apply the filter
Bitmap resultImage = filter.Apply( sourceImage );
Source image:
Overlay image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Initializes a new instance of the class.
Overlay image
Initializes a new instance of the class.
Unmanaged overlay image.
Process the filter on the specified image.
Source image data.
Overlay image data.
Calculate difference between two images and threshold it.
The filter produces a result similar to applying the filter and
then the filter - the thresholded difference between two images. The result of this
image processing routine may be useful in motion detection applications or for finding areas of significant
difference.
The filter accepts 8 bpp grayscale and 24/32 bpp color images for processing.
In the case of color images, the routine sums the differences over the 3 RGB channels (Manhattan distance), i.e.
|diffR| + |diffG| + |diffB|.
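For illustration, the per-pixel decision for 24 bpp color images can be sketched as follows (illustrative variable names; not the filter's optimized code):
// srcR/G/B - pixel of the processed image, ovrR/G/B - pixel of the overlay image
int difference = Math.Abs( srcR - ovrR ) + Math.Abs( srcG - ovrG ) + Math.Abs( srcB - ovrB );
byte result = (byte) ( ( difference > threshold ) ? 255 : 0 );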
Sample usage:
// create filter
ThresholdedDifference filter = new ThresholdedDifference( 60 );
// apply the filter
filter.OverlayImage = backgroundImage;
Bitmap resultImage = filter.Apply( sourceImage );
Source image:
Background image:
Result image:
Difference threshold.
The property specifies the difference threshold. If the difference between pixels of the image being processed
and the overlay image is greater than this value, then the corresponding pixel of the result image is set to white; otherwise
black.
Default value is set to 15.
Number of pixels which were set to white in the destination image during the last image processing call.
The property may be useful for determining the amount of difference between two images, which,
for example, may be treated as the amount of motion in motion detection applications.
Format translations dictionary.
See for more information.
Initializes a new instance of the class.
Initializes a new instance of the class.
Difference threshold (see ).
Process the filter on the specified image.
Source image data.
Overlay image data.
Destination image data.
Calculate Euclidean difference between two images and threshold it.
The filter produces a result similar to , however it uses
Euclidean distance for finding the difference between pixel values instead of Manhattan distance. The result of this
image processing routine may be useful in motion detection applications or for finding areas of significant
difference.
The filter accepts 8 bpp grayscale and 24/32 bpp color images for processing.
Sample usage:
// create filter
ThresholdedEuclideanDifference filter = new ThresholdedEuclideanDifference( 60 );
// apply the filter
filter.OverlayImage = backgroundImage;
Bitmap resultImage = filter.Apply( sourceImage );
Source image:
Background image:
Result image:
Difference threshold.
The property specifies the difference threshold. If the difference between pixels of the image being processed
and the overlay image is greater than this value, then the corresponding pixel of the result image is set to white; otherwise
black.
Default value is set to 15.
Number of pixels which were set to white in the destination image during the last image processing call.
The property may be useful for determining the amount of difference between two images, which,
for example, may be treated as the amount of motion in motion detection applications.
Format translations dictionary.
See for more information.
Initializes a new instance of the class.
Initializes a new instance of the class.
Difference threshold (see ).
Process the filter on the specified image.
Source image data.
Overlay image data.
Destination image data.
Adaptive thresholding using the integral image.
The image processing routine implements the local thresholding technique described
by Derek Bradley and Gerhard Roth in the "Adaptive Thresholding Using the Integral Image" paper.
The brief idea of the algorithm is that every image pixel is set to black if its brightness
is t percent lower (see ) than the average brightness
of the surrounding pixels in a window of the specified size (see ), otherwise it is set
to white.
Sample usage:
// create the filter
BradleyLocalThresholding filter = new BradleyLocalThresholding( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
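For illustration, a sketch of the per-pixel decision under the rule above (windowSum and windowCount would be obtained from an integral image; names are illustrative):
// set the pixel to black if it is t percent lower than the window average
double windowMean = (double) windowSum / windowCount;
result = (byte) ( ( pixel < windowMean * ( 1.0 - t ) ) ? 0 : 255 );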
Window size to calculate average value of pixels for.
The property specifies the window size around the pixel being processed, which determines the number of
neighbor pixels used to calculate their average brightness.
Default value is set to 41.
The value should be odd.
Brightness difference limit between processing pixel and average value across neighbors.
The property specifies the allowed difference, in percent, between the pixel being processed
and the average brightness of its neighbor pixels for the pixel to be set white. If the value of the
current pixel is t percent (this property's value) lower than the average, then it is set
to black, otherwise it is set to white.
Default value is set to 0.15.
Format translations dictionary.
See for more information.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Iterative threshold search and binarization.
The algorithm works in the following way:
- select any start threshold;
- compute the average value of the Background (µB) and Object (µO) pixels:
1) all pixels with a value below the threshold belong to the Background;
2) all pixels with a value greater than or equal to the threshold belong to the Object;
- calculate the new threshold: (µB + µO) / 2;
- if |oldThreshold - newThreshold| is less than a given minimum allowed error, then stop the iteration process
and create the binary image with the new threshold.
For additional information see Digital Image Processing, Gonzalez/Woods, Ch. 10, page 599.
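For illustration, a sketch of the search on a 256-bin histogram (a managed sketch for 8 bpp images; the filter itself also supports 16 bpp):
static int FindIterativeThreshold( int[] histogram, int startThreshold, int minError )
{
    int threshold = startThreshold, oldThreshold;
    do
    {
        oldThreshold = threshold;
        long sumB = 0, countB = 0, sumO = 0, countO = 0;
        for ( int i = 0; i < histogram.Length; i++ )
        {
            if ( i < threshold ) { sumB += (long) i * histogram[i]; countB += histogram[i]; }
            else                 { sumO += (long) i * histogram[i]; countO += histogram[i]; }
        }
        double muB = ( countB > 0 ) ? (double) sumB / countB : 0;
        double muO = ( countO > 0 ) ? (double) sumO / countO : 0;
        threshold = (int) ( ( muB + muO ) / 2 );   // new threshold: (µB + µO) / 2
    }
    while ( Math.Abs( threshold - oldThreshold ) > minError );
    return threshold;
}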
The filter accepts 8 and 16 bpp grayscale images for processing.
Since the filter can be applied to both 8 bpp and 16 bpp images,
the initial value of the property should be set appropriately for the
pixel format. In the case of 8 bpp images the threshold value is in the [0, 255] range, but
in the case of 16 bpp images the threshold value is in the [0, 65535] range.
Sample usage:
// create filter
IterativeThreshold filter = new IterativeThreshold( 2, 128 );
// apply the filter
Bitmap newImage = filter.Apply( image );
Initial image:
Result image (calculated threshold is 102):
Minimum error; when it is reached, the iterative threshold search is stopped.
Default value is set to 0.
Initializes a new instance of the class.
Initializes a new instance of the class.
Minimum allowed error that ends the iteration process.
Initializes a new instance of the class.
Minimum allowed error that ends the iteration process.
Initial threshold value.
Calculate binarization threshold for the given image.
Image to calculate binarization threshold for.
Rectangle to calculate binarization threshold for.
Returns binarization threshold.
The method is used to calculate binarization threshold only. The threshold
later may be applied to the image using image processing filter.
Source pixel format is not supported by the routine. It should be
an 8 bpp grayscale (indexed) or 16 bpp grayscale image.
Calculate binarization threshold for the given image.
Image to calculate binarization threshold for.
Rectangle to calculate binarization threshold for.
Returns binarization threshold.
The method is used to calculate binarization threshold only. The threshold
later may be applied to the image using image processing filter.
Source pixel format is not supported by the routine. It should be
an 8 bpp grayscale (indexed) or 16 bpp grayscale image.
Calculate binarization threshold for the given image.
Image to calculate binarization threshold for.
Rectangle to calculate binarization threshold for.
Returns binarization threshold.
The method is used to calculate binarization threshold only. The threshold
later may be applied to the image using image processing filter.
Source pixel format is not supported by the routine. It should be
an 8 bpp grayscale (indexed) or 16 bpp grayscale image.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Otsu thresholding.
The class implements Otsu thresholding, which is described in
N. Otsu, "A threshold selection method from gray-level histograms", IEEE Trans. Systems,
Man and Cybernetics 9(1), pp. 62–66, 1979.
Instead of minimizing the weighted within-class variance, this implementation
maximizes the between-class variance, which gives the same result. The approach is
described in this presentation.
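For illustration, a sketch of the between-class variance maximization over a 256-bin histogram (not the class's actual code):
static int FindOtsuThreshold( int[] histogram )
{
    long total = 0, sumAll = 0;
    for ( int i = 0; i < histogram.Length; i++ )
    {
        total  += histogram[i];
        sumAll += (long) i * histogram[i];
    }
    long weightB = 0, sumB = 0;
    double bestVariance = -1.0;
    int bestThreshold = 0;
    for ( int t = 0; t < histogram.Length; t++ )
    {
        weightB += histogram[t];            // background weight
        if ( weightB == 0 )
            continue;
        long weightF = total - weightB;     // foreground weight
        if ( weightF == 0 )
            break;
        sumB += (long) t * histogram[t];
        double meanB = (double) sumB / weightB;
        double meanF = (double) ( sumAll - sumB ) / weightF;
        // between-class variance for this candidate threshold
        double variance = (double) weightB * weightF * ( meanB - meanF ) * ( meanB - meanF );
        if ( variance > bestVariance )
        {
            bestVariance  = variance;
            bestThreshold = t;
        }
    }
    return bestThreshold;
}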
The filter accepts 8 bpp grayscale images for processing.
Sample usage:
// create filter
OtsuThreshold filter = new OtsuThreshold( );
// apply the filter
filter.ApplyInPlace( image );
// check threshold value
byte t = filter.ThresholdValue;
// ...
Initial image:
Result image (calculated threshold is 97):
Format translations dictionary.
Threshold value.
The property is read-only and represents the value which
was automatically calculated using the Otsu algorithm.
Initializes a new instance of the class.
Calculate binarization threshold for the given image.
Image to calculate binarization threshold for.
Rectangle to calculate binarization threshold for.
Returns binarization threshold.
The method is used to calculate binarization threshold only. The threshold
later may be applied to the image using image processing filter.
Source pixel format is not supported by the routine. It should be
8 bpp grayscale (indexed) image.
Calculate binarization threshold for the given image.
Image to calculate binarization threshold for.
Rectangle to calculate binarization threshold for.
Returns binarization threshold.
The method is used to calculate binarization threshold only. The threshold
later may be applied to the image using image processing filter.
Source pixel format is not supported by the routine. It should be
8 bpp grayscale (indexed) image.
Calculate binarization threshold for the given image.
Image to calculate binarization threshold for.
Rectangle to calculate binarization threshold for.
Returns binarization threshold.
The method is used to calculate binarization threshold only. The threshold
later may be applied to the image using image processing filter.
Source pixel format is not supported by the routine. It should be
8 bpp grayscale (indexed) image.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Threshold using Simple Image Statistics (SIS).
The filter performs image thresholding, calculating the threshold automatically
using the simple image statistics method. For each pixel:
- two gradients are calculated: ex = |I(x + 1, y) - I(x - 1, y)| and
ey = |I(x, y + 1) - I(x, y - 1)|;
- the weight is calculated as the maximum of the two gradients;
- the sum of weights is updated (weightTotal += weight);
- the sum of weighted pixel values is updated (total += weight * I(x, y)).
The resulting threshold is calculated as the sum of weighted pixel values divided by the sum of weights.
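For illustration, a direct transcription of the steps above into managed code (not the class's actual implementation):
static int FindSISThreshold( byte[,] image )
{
    int height = image.GetLength( 0 ), width = image.GetLength( 1 );
    double total = 0, weightTotal = 0;
    for ( int y = 1; y < height - 1; y++ )
    {
        for ( int x = 1; x < width - 1; x++ )
        {
            // horizontal and vertical gradients
            double ex = Math.Abs( image[y, x + 1] - image[y, x - 1] );
            double ey = Math.Abs( image[y + 1, x] - image[y - 1, x] );
            double weight = Math.Max( ex, ey );
            weightTotal += weight;
            total += weight * image[y, x];
        }
    }
    return ( weightTotal == 0 ) ? 0 : (int) ( total / weightTotal );
}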
The filter accepts 8 bpp grayscale images for processing.
Sample usage:
// create filter
SISThreshold filter = new SISThreshold( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image (calculated threshold is 127):
Format translations dictionary.
Threshold value.
The property is read-only and represents the value which
was automatically calculated using image statistics.
Initializes a new instance of the class.
Calculate binarization threshold for the given image.
Image to calculate binarization threshold for.
Rectangle to calculate binarization threshold for.
Returns binarization threshold.
The method is used to calculate binarization threshold only. The threshold
later may be applied to the image using image processing filter.
Source pixel format is not supported by the routine. It should be
8 bpp grayscale (indexed) image.
Calculate binarization threshold for the given image.
Image to calculate binarization threshold for.
Rectangle to calculate binarization threshold for.
Returns binarization threshold.
The method is used to calculate binarization threshold only. The threshold
later may be applied to the image using image processing filter.
Source pixel format is not supported by the routine. It should be
8 bpp grayscale (indexed) image.
Calculate binarization threshold for the given image.
Image to calculate binarization threshold for.
Rectangle to calculate binarization threshold for.
Returns binarization threshold.
The method is used to calculate binarization threshold only. The threshold
later may be applied to the image using image processing filter.
Source pixel format is not supported by the routine. It should be
8 bpp grayscale (indexed) image.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Base class for filters, which produce new image of the same size as a
result of image processing.
The abstract class is the base class for all filters, which
do image processing creating new image with the same size as source.
Filters based on this class cannot be applied directly to the source
image, which is kept unchanged.
The base class itself does not define the supported pixel formats of the source
image or the resulting pixel formats of the destination image. Filters inheriting from
this base class should specify the supported pixel formats and their transformations by
overriding the abstract property.
Format translations dictionary.
The dictionary defines which pixel formats are supported for
source images and which pixel format will be used for the resulting image.
See for more information.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
Unsupported pixel format of the source image.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The filter accepts bitmap data as input and returns the result
of image processing filter as new image. The source image data are kept
unchanged.
Unsupported pixel format of the source image.
Apply filter to an image in unmanaged memory.
Source image in unmanaged memory to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
Unsupported pixel format of the source image.
Apply filter to an image in unmanaged memory.
Source image in unmanaged memory to apply filter to.
Destination image in unmanaged memory to put result into.
The method keeps the source image unchanged and puts result of image processing
into destination image.
The destination image must have the same width and height as the source image. The
destination image must also have the pixel format expected by the particular filter (see the
property for information about pixel format conversions).
Unsupported pixel format of the source image.
Incorrect destination pixel format.
Destination image has wrong width and/or height.
Process the filter on the specified image.
Source image data.
Destination image data.
Base class for filters, which operate with two images of the same size and format and
produce new image as a result.
The abstract class is the base class for all filters which can
be applied to an image, producing a new image as a result of image processing.
The base class is aimed at filters which require an additional image
to process the source image. The additional image is set by the
or property and must have the same size and pixel format
as the source image. See the documentation of the particular inherited class for information
about the overlay image's purpose.
Overlay image.
The property sets an overlay image, which will be used as the second image required
to process the source image. See the documentation of the particular inherited class for information
about the overlay image's purpose.
The overlay image must have the same size and pixel format as the source image.
Otherwise an exception will be generated when the filter is applied to the source image.
Setting this property will clear the property -
only one overlay image is allowed: managed or unmanaged.
Unmanaged overlay image.
The property sets an overlay image, which will be used as the second image required
to process the source image. See the documentation of the particular inherited class for information
about the overlay image's purpose.
The overlay image must have the same size and pixel format as the source image.
Otherwise an exception will be generated when the filter is applied to the source image.
Setting this property will clear the property -
only one overlay image is allowed: managed or unmanaged.
Initializes a new instance of the class.
Initializes a new instance of the class.
Overlay image.
Initializes a new instance of the class.
Unmanaged overlay image.
Process the filter on the specified image.
Source image data.
Destination image data.
Process the filter on the specified image.
Source image data.
Overlay image data.
Destination image data.
The overlay image's size and pixel format are checked by this base class before
passing execution to the inherited class.
Base class for filters, which may be applied directly to the source image.
The abstract class is the base class for all filters which can
be applied to an image, producing a new image as a result of image processing, or
applied directly to the source image without changing its size and pixel format.
Format translations dictionary.
The dictionary defines which pixel formats are supported for
source images and which pixel format will be used for the resulting image.
See for more information.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
Unsupported pixel format of the source image.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The filter accepts bitmap data as input and returns the result
of image processing filter as new image. The source image data are kept
unchanged.
Unsupported pixel format of the source image.
Apply filter to an image in unmanaged memory.
Source image in unmanaged memory to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
Unsupported pixel format of the source image.
Apply filter to an image in unmanaged memory.
Source image in unmanaged memory to apply filter to.
Destination image in unmanaged memory to put result into.
The method keeps the source image unchanged and puts result of image processing
into destination image.
The destination image must have the same width and height as the source image. The
destination image must also have the pixel format expected by the particular filter (see the
property for information about pixel format conversions).
Unsupported pixel format of the source image.
Incorrect destination pixel format.
Destination image has wrong width and/or height.
Apply filter to an image.
Image to apply filter to.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Apply filter to an image.
Image data to apply filter to.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Apply filter to an unmanaged image.
Unmanaged image to apply filter to.
The method applies the filter directly to the provided source unmanaged image.
Unsupported pixel format of the source image.
Process the filter on the specified image.
Source image data.
Base class for filters, which operate with two images of the same size and format and
may be applied directly to the source image.
The abstract class is the base class for all filters which can
be applied to an image, producing a new image as a result of image processing, or
applied directly to the source image without changing its size and pixel format.
The base class is aimed at filters which require an additional image
to process the source image. The additional image is set by the
or property and must have the same size and pixel format
as the source image. See the documentation of the particular inherited class for information
about the overlay image's purpose.
Overlay image.
The property sets an overlay image, which will be used as the second image required
to process the source image. See the documentation of the particular inherited class for information
about the overlay image's purpose.
The overlay image must have the same size and pixel format as the source image.
Otherwise an exception will be generated when the filter is applied to the source image.
Setting this property will clear the property -
only one overlay image is allowed: managed or unmanaged.
Unmanaged overlay image.
The property sets an overlay image, which will be used as the second image required
to process the source image. See the documentation of the particular inherited class for information
about the overlay image's purpose.
The overlay image must have the same size and pixel format as the source image.
Otherwise an exception will be generated when the filter is applied to the source image.
Setting this property will clear the property -
only one overlay image is allowed: managed or unmanaged.
Initializes a new instance of the class.
Initializes a new instance of the class.
Overlay image.
Initializes a new instance of the class.
Unmanaged overlay image.
Process the filter on the specified image.
Source image data.
Source and overlay images have different pixel formats and/or size.
Overlay image is not set.
Process the filter on the specified image.
Source image data.
Overlay image data.
The overlay image's size and pixel format are checked by this base class before
passing execution to the inherited class.
Base class for filters, which may be applied directly to the source image or its part.
The abstract class is the base class for all filters which can
be applied to an image, producing a new image as a result of image processing, or
applied directly to the source image (or its part) without changing its size and
pixel format.
Format translations dictionary.
The dictionary defines which pixel formats are supported for
source images and which pixel format will be used for the resulting image.
See for more information.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
Unsupported pixel format of the source image.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The filter accepts bitmap data as input and returns the result
of image processing filter as new image. The source image data are kept
unchanged.
Unsupported pixel format of the source image.
Apply filter to an image in unmanaged memory.
Source image in unmanaged memory to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
Unsupported pixel format of the source image.
Apply filter to an image in unmanaged memory.
Source image in unmanaged memory to apply filter to.
Destination image in unmanaged memory to put result into.
The method keeps the source image unchanged and puts result of image processing
into destination image.
The destination image must have the same width and height as the source image. The
destination image must also have the pixel format expected by the particular filter (see the
property for information about pixel format conversions).
Unsupported pixel format of the source image.
Incorrect destination pixel format.
Destination image has wrong width and/or height.
Apply filter to an image.
Image to apply filter to.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Apply filter to an image.
Image data to apply filter to.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Apply filter to an unmanaged image.
Unmanaged image to apply filter to.
The method applies the filter directly to the provided source unmanaged image.
Unsupported pixel format of the source image.
Apply filter to an image or its part.
Image to apply filter to.
Image rectangle for processing by the filter.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Apply filter to an image or its part.
Image data to apply filter to.
Image rectangle for processing by the filter.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Apply filter to an unmanaged image or its part.
Unmanaged image to apply filter to.
Image rectangle for processing by the filter.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Base class for image resizing filters.
The abstract class is the base class for all filters
which implement image resizing algorithms.
New image width.
New image height.
Width of the new resized image.
Height of the new resized image.
Initializes a new instance of the class.
Width of the new resized image.
Height of the new resized image.
Calculates new image size.
Source image data.
New image size - size of the destination image.
Base class for image rotation filters.
The abstract class is the base class for all filters
which implement image rotation algorithms.
Rotation angle.
Keep image size or not.
Fill color.
Rotation angle, [0, 360].
Keep image size or not.
The property determines whether the source image's size will be kept
as it is or not. If the value is set to false, then the new image will have
dimensions corresponding to the rotation angle. If the value is set to
true, then the new image will have the same size, which means that some parts
of the image may be clipped because of rotation.
Fill color.
The fill color is used to fill areas of the destination image
which don't have corresponding pixels in the source image.
Initializes a new instance of the class.
Rotation angle.
This constructor sets property to false.
Initializes a new instance of the class.
Rotation angle.
Keep image size or not.
Calculates new image size.
Source image data.
New image size - size of the destination image.
Base class for filters, which may produce new image of different size as a
result of image processing.
The abstract class is the base class for all filters
which create a new image whose size may differ from the
size of the source image. Filters based on this class cannot be applied directly
to the source image, which is kept unchanged.
The base class itself does not define the supported pixel formats of the source
image or the resulting pixel formats of the destination image. Filters inheriting from
this base class should specify the supported pixel formats and their transformations by
overriding the abstract property.
Format translations dictionary.
The dictionary defines which pixel formats are supported for
source images and which pixel format will be used for the resulting image.
See for more information.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
Unsupported pixel format of the source image.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The filter accepts bitmap data as input and returns the result
of image processing filter as new image. The source image data are kept
unchanged.
Unsupported pixel format of the source image.
Apply filter to an image in unmanaged memory.
Source image in unmanaged memory to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
Unsupported pixel format of the source image.
Apply filter to an image in unmanaged memory.
Source image in unmanaged memory to apply filter to.
Destination image in unmanaged memory to put result into.
The method keeps the source image unchanged and puts result of image processing
into destination image.
The destination image must have the same width and height as the source image. The
destination image must also have the pixel format expected by the particular filter (see the
property for information about pixel format conversions).
Unsupported pixel format of the source image.
Incorrect destination pixel format.
Destination image has wrong width and/or height.
Calculates new image size.
Source image data.
New image size - size of the destination image.
Process the filter on the specified image.
Source image data.
Destination image data.
Base class for filters, which require source image backup to make them applicable to
source image (or its part) directly.
The base class is used for filters which cannot manipulate
the source image directly. To achieve the effect of in-place filtering,
these filters create a background copy of the original image (done by this
base class), then manipulate the copy and put the result back into the original
source image.
The background copy of the source image is created only in the case of in-place
filtering. Otherwise no background copy is created - the source image is processed and the result is
put into the destination image.
The base class is for filters which support both filtering the entire image and
partial filtering of a specified rectangle only.
Format translations dictionary.
The dictionary defines which pixel formats are supported for
source images and which pixel format will be used for the resulting image.
See for more information.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
Unsupported pixel format of the source image.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The filter accepts bitmap data as input and returns the result
of image processing filter as new image. The source image data are kept
unchanged.
Unsupported pixel format of the source image.
Apply filter to an image in unmanaged memory.
Source image in unmanaged memory to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
Unsupported pixel format of the source image.
Apply filter to an image in unmanaged memory.
Source image in unmanaged memory to apply filter to.
Destination image in unmanaged memory to put result into.
The method keeps the source image unchanged and puts result of image processing
into destination image.
The destination image must have the same width and height as the source image. The
destination image must also have the pixel format expected by the particular filter (see the
property for information about pixel format conversions).
Unsupported pixel format of the source image.
Incorrect destination pixel format.
Destination image has wrong width and/or height.
Apply filter to an image.
Image to apply filter to.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Apply filter to an image.
Image data to apply filter to.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Apply filter to an unmanaged image.
Unmanaged image to apply filter to.
The method applies the filter directly to the provided source unmanaged image.
Unsupported pixel format of the source image.
Apply filter to an image or its part.
Image to apply filter to.
Image rectangle for processing by the filter.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Apply filter to an image or its part.
Image data to apply filter to.
Image rectangle for processing by the filter.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Apply filter to an unmanaged image or its part.
Unmanaged image to apply filter to.
Image rectangle for processing by the filter.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Ordered dithering using Bayer matrix.
The filter represents the filter, initialized
with the following threshold matrix:
byte[,] matrix = new byte[4, 4]
{
{ 0, 192, 48, 240 },
{ 128, 64, 176, 112 },
{ 32, 224, 16, 208 },
{ 160, 96, 144, 80 }
};
The filter accepts 8 bpp grayscale images for processing.
Sample usage:
// create filter
BayerDithering filter = new BayerDithering( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Initializes a new instance of the class.
Dithering using Burkes error diffusion.
The filter represents a binarization filter based on
error diffusion dithering with Burkes coefficients. The error is diffused
onto 7 neighbor pixels with the following coefficients:
| * | 8 | 4 |
| 2 | 4 | 8 | 4 | 2 |
/ 32
The filter accepts 8 bpp grayscale images for processing.
Sample usage:
// create filter
BurkesDithering filter = new BurkesDithering( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Initializes a new instance of the class.
Base class for error diffusion dithering.
The class is the base class for binarization algorithms based on
error diffusion.
The idea of binarization with error diffusion is similar to binarization based on thresholding
of pixels' cumulative value (see ). Each pixel is binarized based not only
on its own value, but also on the values of some surrounding pixels. During a pixel's binarization, its binarization
error is distributed (diffused) to some neighbor pixels with certain coefficients. This error diffusion
updates the neighbor pixels, changing their values, which affects their upcoming binarization. The error diffuses
only onto not-yet-processed neighbor pixels, which are usually the pixels to the right and below (when image
processing is done from the upper-left corner to the bottom-right corner). The binarization error equals
the value of the pixel being processed if it is below the threshold value, or the pixel value minus 255 otherwise.
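For illustration, a managed sketch of the whole scheme using the Floyd-Steinberg coefficients of one of the inherited classes below (not the library's implementation):
static void DiffusionDither( byte[,] image, int threshold )
{
    int height = image.GetLength( 0 ), width = image.GetLength( 1 );
    for ( int y = 0; y < height; y++ )
    {
        for ( int x = 0; x < width; x++ )
        {
            int value = image[y, x];
            // binarization error: pixel value if below threshold, value - 255 otherwise
            int error = ( value < threshold ) ? value : value - 255;
            image[y, x] = (byte) ( ( value < threshold ) ? 0 : 255 );
            // diffuse the error onto unprocessed neighbors (7, 3, 5, 1 over 16)
            if ( x + 1 < width )
                image[y, x + 1] = Clamp( image[y, x + 1] + error * 7 / 16 );
            if ( y + 1 < height )
            {
                if ( x > 0 )
                    image[y + 1, x - 1] = Clamp( image[y + 1, x - 1] + error * 3 / 16 );
                image[y + 1, x] = Clamp( image[y + 1, x] + error * 5 / 16 );
                if ( x + 1 < width )
                    image[y + 1, x + 1] = Clamp( image[y + 1, x + 1] + error * 1 / 16 );
            }
        }
    }
}
static byte Clamp( int value )
{
    return (byte) Math.Max( 0, Math.Min( 255, value ) );
}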
The filter accepts 8 bpp grayscale images for processing.
Threshold value.
Default value is 128.
Current processing X coordinate.
Current processing Y coordinate.
Processing X start position.
Processing Y start position.
Processing X stop position.
Processing Y stop position.
Processing image's stride (line size).
Format translations dictionary.
Initializes a new instance of the class.
Do error diffusion.
Current error value.
Pointer to current processing pixel.
All parameters of the image and current processing pixel's coordinates
are initialized in protected members.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Base class for error diffusion dithering, where error is diffused to
adjacent neighbor pixels.
The class does error diffusion to adjacent neighbor pixels
using a specified set of coefficients. These coefficients are represented by
a 2-dimensional jagged array, where the first array of coefficients is for the
pixels to the right, and the rest of the arrays are for the pixels below.
All arrays except the first one should have an odd number of coefficients.
Suppose that the error diffusion coefficients are represented by the following
jagged array:
int[][] coefficients = new int[2][] {
new int[1] { 7 },
new int[3] { 3, 5, 1 }
};
The above coefficients are used to diffuse the error over the following neighbor
pixels (* marks the current pixel; the coefficients are placed at the corresponding
neighbor pixels):
| * | 7 |
| 3 | 5 | 1 |
/ 16
The filter accepts 8 bpp grayscale images for processing.
Sample usage:
// create filter
ErrorDiffusionToAdjacentNeighbors filter = new ErrorDiffusionToAdjacentNeighbors(
new int[3][] {
new int[2] { 5, 3 },
new int[5] { 2, 4, 5, 4, 2 },
new int[3] { 2, 3, 2 }
} );
// apply the filter
filter.ApplyInPlace( image );
Diffusion coefficients.
Set of coefficients which are used for error diffusion to a
pixel's neighbors.
Initializes a new instance of the class.
Diffusion coefficients.
Do error diffusion.
Current error value.
Pointer to current processing pixel.
All parameters of the image and current processing pixel's coordinates
are initialized by base class.
Dithering using Floyd-Steinberg error diffusion.
The filter represents a binarization filter based on
error diffusion dithering with Floyd-Steinberg
coefficients. The error is diffused onto 4 neighbor pixels with the following coefficients:
| * | 7 |
| 3 | 5 | 1 |
/ 16
The filter accepts 8 bpp grayscale images for processing.
Sample usage:
// create filter
FloydSteinbergDithering filter = new FloydSteinbergDithering( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Initializes a new instance of the class.
Dithering using Jarvis, Judice and Ninke error diffusion.
The filter represents a binarization filter based on
error diffusion dithering with Jarvis-Judice-Ninke coefficients. The error is diffused
onto 12 neighbor pixels with the following coefficients:
| * | 7 | 5 |
| 3 | 5 | 7 | 5 | 3 |
| 1 | 3 | 5 | 3 | 1 |
/ 48
The filter accepts 8 bpp grayscale images for processing.
Sample usage:
// create filter
JarvisJudiceNinkeDithering filter = new JarvisJudiceNinkeDithering( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Initializes a new instance of the class.
Binarization with thresholds matrix.
The idea of the filter is the same as that of the filter -
change a pixel's value to white if its intensity is equal to or higher than the threshold value, or
to black otherwise. But instead of using a single threshold value for all pixels, the filter
uses a matrix of threshold values. The image being processed is divided into adjacent windows of the
matrix size. For the binarization of pixels inside each window, the corresponding threshold values
from the specified threshold matrix are used, as in the sketch below.
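For illustration, the tiling and per-pixel comparison can be sketched as follows (illustrative variable names; not the filter's actual code):
// tile the threshold matrix over the image and binarize each pixel
int rows = matrix.GetLength( 0 ), cols = matrix.GetLength( 1 );
for ( int y = 0; y < height; y++ )
    for ( int x = 0; x < width; x++ )
        image[y, x] = (byte) ( ( image[y, x] >= matrix[y % rows, x % cols] ) ? 255 : 0 );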
The filter accepts 8 bpp grayscale images for processing.
Sample usage:
// create binarization matrix
byte[,] matrix = new byte[4, 4]
{
{ 95, 233, 127, 255 },
{ 159, 31, 191, 63 },
{ 111, 239, 79, 207 },
{ 175, 47, 143, 15 }
};
// create filter
OrderedDithering filter = new OrderedDithering( matrix );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Initializes a new instance of the class.
Thresholds matrix.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Dithering using Sierra error diffusion.
The filter represents a binarization filter based on
error diffusion dithering with Sierra coefficients. The error is diffused
onto 10 neighbor pixels with the following coefficients:
| * | 5 | 3 |
| 2 | 4 | 5 | 4 | 2 |
| 2 | 3 | 2 |
/ 32
The filter accepts 8 bpp grayscale images for processing.
Sample usage:
// create filter
SierraDithering filter = new SierraDithering( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Initializes a new instance of the class.
Dithering using Stucki error diffusion.
The filter represents a binarization filter based on
error diffusion dithering with Stucki coefficients. The error is diffused
onto 12 neighbor pixels with the following coefficients:
| * | 8 | 4 |
| 2 | 4 | 8 | 4 | 2 |
| 1 | 2 | 4 | 2 | 1 |
/ 42
The filter accepts 8 bpp grayscale images for processing.
Sample usage:
// create filter
StuckiDithering filter = new StuckiDithering( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Initializes a new instance of the class.
Threshold binarization.
The filter does image binarization using the specified threshold value. All pixels
with intensities equal to or higher than the threshold value are converted to white pixels. All other
pixels, with intensities below the threshold value, are converted to black pixels.
The filter accepts 8 and 16 bpp grayscale images for processing.
Since the filter can be applied to both 8 bpp and 16 bpp images,
the value should be set appropriately for the pixel format.
In the case of 8 bpp images the threshold value is in the [0, 255] range, but in the case
of 16 bpp images the threshold value is in the [0, 65535] range.
Sample usage:
// create filter
Threshold filter = new Threshold( 100 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Threshold value.
Format translations dictionary.
Threshold value.
Default value is set to 128.
Initializes a new instance of the class.
Initializes a new instance of the class.
Threshold value.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Threshold binarization with error carry.
The filter is similar to the filter in that
it also uses a threshold value for image binarization. Unlike the regular threshold
filter, this filter compares a cumulative pixel value with the threshold value.
If the cumulative pixel value is below the threshold value, the image pixel becomes black.
If the cumulative pixel value is equal to or higher than the threshold value, the image pixel
becomes white and the cumulative pixel value is decreased by 255. At the beginning of each
image line the cumulative value is reset to 0.
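For illustration, a sketch of processing one image line (illustrative variable names; not the filter's actual code):
int carry = 0; // cumulative value, reset at the beginning of each line
for ( int x = 0; x < width; x++ )
{
    carry += row[x];
    if ( carry >= threshold )
    {
        row[x] = 255;  // white
        carry -= 255;  // decrease the cumulative value
    }
    else
    {
        row[x] = 0;    // black
    }
}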
The filter accepts 8 bpp grayscale images for processing.
Sample usage:
// create filter
ThresholdWithCarry filter = new ThresholdWithCarry( 100 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Threshold value.
Default value is 128.
Initializes a new instance of the class.
Initializes a new instance of the class.
Threshold value.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Generic Bayer filter image processing routine.
The class implements Bayer filter
routine, which creates color image out of grayscale image produced by image sensor built with
Bayer color matrix.
This Bayer filter implementation is made generic by allowing the user to specify the
Bayer pattern to use, which makes it slower. For an optimized version
of the Bayer filter, see the optimized Bayer filter class (described below), which implements a Bayer filter
specifically optimized for some well known patterns.
The filter accepts 8 bpp grayscale images and produces 24 bpp RGB image.
Sample usage:
// create filter
BayerFilter filter = new BayerFilter( );
// apply the filter
Bitmap rgbImage = filter.Apply( image );
Source image:
Result image:
Specifies if demosaicing must be done or not.
The property specifies if color demosaicing must be done or not.
If the property is set to false, then pixels of the result color image
are colored according to the Bayer pattern used, i.e. every pixel
of the source grayscale image is copied to the corresponding color plane of the result image.
If the property is set to true, then pixels of the result image
are set to a color obtained by averaging color components from the 3x3 window - the pixel
itself plus its 8 surrounding neighbors.
Default value is set to .
Specifies Bayer pattern used for decoding color image.
The property specifies a 2x2 array of RGB color indexes, which sets the
Bayer pattern used for decoding the color image.
By default the property is set to:
new int[2, 2] { { RGB.G, RGB.R }, { RGB.B, RGB.G } },
which corresponds to the
G R
B G
pattern.
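As a hedged illustration (assuming the pattern property is named BayerPattern, matching the default shown above), decoding a sensor image with a different pattern might look like:
// illustrative sketch: configure a custom pattern before decoding
BayerFilter filter = new BayerFilter( );
// use the
// B G
// G R
// pattern instead of the default
filter.BayerPattern = new int[2, 2] { { RGB.B, RGB.G }, { RGB.G, RGB.R } };
Bitmap rgbImage = filter.Apply( image );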
Format translations dictionary.
See
documentation for additional information.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Destination image data.
Set of Bayer patterns supported by the optimized Bayer filter.
Pattern:
G R
B G
Pattern:
B G
G R
Optimized Bayer filter image processing routine.
The class implements Bayer filter
routine, which creates color image out of grayscale image produced by image sensor built with
Bayer color matrix.
This class does the same as the generic Bayer filter class, but this version is
optimized for some well known patterns defined in the Bayer patterns enumeration.
Also, this class processes images with even width and height only; image size must be at least 2x2 pixels.
The filter accepts 8 bpp grayscale images and produces 24 bpp RGB image.
Sample usage:
// create filter
BayerFilterOptimized filter = new BayerFilterOptimized( );
// apply the filter
Bitmap rgbImage = filter.Apply( image );
Bayer pattern of source images to decode.
The property specifies Bayer pattern of source images to be
decoded into color images.
Default value is set to .
Format translations dictionary.
See
documentation for additional information.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Destination image data.
Brightness adjusting in RGB color space.
The filter operates in RGB color space and adjusts
pixels' brightness by increasing every pixel's RGB values by the specified
adjust value. The filter is based on the levels linear correction
filter and simply sets all input ranges to (0, 255 - adjustValue) and
all output ranges to (adjustValue, 255) when the adjust value is positive.
If the adjust value is negative, then all input ranges are set to
(-adjustValue, 255) and all output ranges are set to
(0, 255 + adjustValue).
See the base filter's documentation for more information.
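For example, the following hedged sketch shows the equivalent levels linear setup for an adjust value of +50 (the Input/Output properties set all three RGB ranges at once):
// illustrative sketch: brightness correction expressed through LevelsLinear
LevelsLinear levels = new LevelsLinear( );
levels.Input = new IntRange( 0, 255 - 50 ); // input ranges (0, 255 - adjustValue)
levels.Output = new IntRange( 50, 255 );    // output ranges (adjustValue, 255)
levels.ApplyInPlace( image );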
The filter accepts 8 bpp grayscale and 24/32 bpp color images for processing.
Sample usage:
// create filter
BrightnessCorrection filter = new BrightnessCorrection( -50 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Brightness adjust value, [-255, 255].
Default value is set to 10, which corresponds to increasing
RGB values of each pixel by 10.
Format translations dictionary.
See
documentation for additional information.
Initializes a new instance of the class.
Initializes a new instance of the class.
Brightness adjust value.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Channels filters.
The filter performs filtering of color channels by clearing (filling with
specified values) channel values which are inside or outside of the specified
range. The filter allows certain ranges of RGB color channels to be filled with a
specified value.
The filter is similar to the color filtering filter, but operates not on
entire pixels but on their RGB values individually. This means that a pixel itself may
not be filtered (it will be kept), while one of its RGB values may be filtered if it is
inside/outside of the specified range.
The filter accepts 24 and 32 bpp color images for processing.
Sample usage:
// create filter
ChannelFiltering filter = new ChannelFiltering( );
// set channels' ranges to keep
filter.Red = new IntRange( 0, 255 );
filter.Green = new IntRange( 100, 255 );
filter.Blue = new IntRange( 100, 255 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Red channel's range.
Red fill value.
Green channel's range.
Green fill value.
Blue channel's range.
Blue fill value.
Determines if the red channel should be filled inside or outside the filtering range.
Default value is set to .
Determines if the green channel should be filled inside or outside the filtering range.
Default value is set to .
Determines if the blue channel should be filled inside or outside the filtering range.
Default value is set to .
Initializes a new instance of the class.
Initializes a new instance of the class.
Red channel's filtering range.
Green channel's filtering range.
Blue channel's filtering range.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Calculate filtering map.
Filtering range.
Fill value.
Fill outside or inside the range.
Filtering map.
Color filtering.
The filter filters pixels inside/outside of specified RGB color range -
it keeps pixels with colors inside/outside of specified range and fills the rest with
specified color.
The filter accepts 24 and 32 bpp color images for processing.
Sample usage:
// create filter
ColorFiltering filter = new ColorFiltering( );
// set color ranges to keep
filter.Red = new IntRange( 100, 255 );
filter.Green = new IntRange( 0, 75 );
filter.Blue = new IntRange( 0, 75 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Range of red color component.
Range of green color component.
Range of blue color component.
Fill color used to fill filtered pixels.
Determines if pixels should be filled inside or outside of the specified
color ranges.
Default value is set to true, which means
the filter removes colors outside of the specified range.
Initializes a new instance of the class.
Initializes a new instance of the class.
Red components filtering range.
Green components filtering range.
Blue components filtering range.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Color remapping.
The filter allows remapping colors of the image. Unlike the levels linear correction filter,
this filter allows non-linear remapping. For each pixel of the specified image the filter changes
its values (the value of each color plane) to values stored in the remapping arrays at the corresponding
indexes. For example, if a pixel's RGB value equals (32, 96, 128), the filter will change it to
(RedMap[32], GreenMap[96], BlueMap[128]).
The filter accepts 8 bpp grayscale and 24/32 bpp color images for processing.
Sample usage:
// create map
byte[] map = new byte[256];
for ( int i = 0; i < 256; i++ )
{
map[i] = (byte) Math.Min( 255, Math.Pow( 2, (double) i / 32 ) );
}
// create filter
ColorRemapping filter = new ColorRemapping( map, map, map );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Remapping array for red color plane.
The remapping array should contain 256 remapping values. The remapping occurs
by changing a pixel's red value r to RedMap[r].
A map should be an array of 256 values.
Remapping array for green color plane.
The remapping array should contain 256 remapping values. The remapping occurs
by changing a pixel's green value g to GreenMap[g].
A map should be an array of 256 values.
Remapping array for blue color plane.
The remapping array should contain 256 remapping values. The remapping occurs
by changing a pixel's blue value b to BlueMap[b].
A map should be an array of 256 values.
Remapping array for gray color.
The remapping array should contain 256 remapping values. The remapping occurs
by changing a pixel's value g to GrayMap[g].
The gray map is for grayscale images only.
A map should be an array of 256 values.
Initializes a new instance of the class.
Initializes the filter without any remapping: each
pixel value is mapped to itself.
Initializes a new instance of the class.
Red map.
Green map.
Blue map.
Initializes a new instance of the class.
Gray map.
This constructor is intended for grayscale images.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Contrast adjusting in RGB color space.
The filter operates in RGB color space and adjusts
pixels' contrast by increasing the RGB values of bright pixels and decreasing
the RGB values of dark pixels (or vice versa if contrast needs to be decreased).
The filter is based on the levels linear correction
filter and simply sets all input ranges to (factor, 255 - factor) and
all output ranges to (0, 255) when the factor value is positive.
If the factor value is negative, then all input ranges are set to
(0, 255) and all output ranges are set to
(-factor, 255 + factor).
See the base filter's documentation for more information.
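For example, the following hedged sketch shows the equivalent levels linear setup for a contrast factor of +15:
// illustrative sketch: contrast correction expressed through LevelsLinear
LevelsLinear levels = new LevelsLinear( );
levels.Input = new IntRange( 15, 255 - 15 ); // input ranges (factor, 255 - factor)
levels.Output = new IntRange( 0, 255 );      // output ranges (0, 255)
levels.ApplyInPlace( image );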
The filter accepts 8 bpp grayscale and 24/32 bpp color images for processing.
Sample usage:
// create filter
ContrastCorrection filter = new ContrastCorrection( 15 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Contrast adjusting factor, [-127, 127].
Factor which is used to adjust contrast. Factor values greater than
0 increase contrast, making light areas lighter and dark areas darker. Factor values
less than 0 decrease contrast, reducing the variation in intensity.
Default value is set to 10.
Format translations dictionary.
See
documentation for additional information.
Initializes a new instance of the class.
Initializes a new instance of the class.
Contrast adjusting factor.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Contrast stretching filter.
Contrast stretching (often called normalization) is a simple image enhancement
technique that attempts to improve the contrast in an image by 'stretching' the range of intensity values
it contains to span a desired range of values, e.g. the full range of pixel values that the image type
concerned allows. It differs from the more sophisticated histogram equalization
in that it can only apply a linear scaling function to the image pixel values.
The result of this filter may be achieved by using the image statistics class, which provides
pixels' intensity histograms, and the levels linear correction filter, which does linear correction
of pixel intensities.
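The following hedged sketch shows this combination for an 8 bpp grayscale image (property and member names follow the classes documented elsewhere in this file):
// illustrative sketch: contrast stretching done manually
ImageStatistics stat = new ImageStatistics( image );
LevelsLinear levels = new LevelsLinear( );
// stretch the observed intensity range to the full [0, 255] range
levels.InGray = new IntRange( stat.Gray.Min, stat.Gray.Max );
levels.OutGray = new IntRange( 0, 255 );
levels.ApplyInPlace( image );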
The filter accepts 8 bpp grayscale and 24 bpp color images.
Sample usage:
// create filter
ContrastStretch filter = new ContrastStretch( );
// process image
filter.ApplyInPlace( sourceImage );
Source image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Euclidean color filtering.
The filter filters pixels whose color is inside/outside
of an RGB sphere with the specified center and radius - it keeps pixels with
colors inside/outside of the specified sphere and fills the rest with the
specified color.
The filter accepts 24 and 32 bpp color images for processing.
Sample usage:
// create filter
EuclideanColorFiltering filter = new EuclideanColorFiltering( );
// set center color and radius
filter.CenterColor = new RGB( 215, 30, 30 );
filter.Radius = 100;
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
RGB sphere's radius, [0, 450].
Default value is 100.
RGB sphere's center.
Default value is (255, 255, 255) - white color.
Fill color used to fill filtered pixels.
Determines if pixels should be filled inside or outside the specified
RGB sphere.
Default value is set to true, which means
the filter removes colors outside of the specified range.
Initializes a new instance of the class.
Initializes a new instance of the class.
RGB sphere's center.
RGB sphere's radius.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Extract RGB channel from image.
Extracts specified channel of color image and returns
it as grayscale image.
The filter accepts 24, 32, 48 and 64 bpp color images and produces
8 (if source is 24 or 32 bpp image) or 16 (if source is 48 or 64 bpp image)
bpp grayscale image.
Sample usage:
// create filter
ExtractChannel filter = new ExtractChannel( RGB.G );
// apply the filter
Bitmap channelImage = filter.Apply( image );
Initial image:
Result image:
Format translations dictionary.
ARGB channel to extract.
Default value is set to .
Invalid channel is specified.
Initializes a new instance of the class.
Initializes a new instance of the class.
ARGB channel to extract.
Process the filter on the specified image.
Source image data.
Destination image data.
Cannot extract the alpha channel from a non-ARGB image. The
exception is thrown when the alpha channel is requested from an RGB image.
Gamma correction filter.
The filter performs gamma correction
of the specified image in RGB color space. Each pixel's value is converted using the
Vout = Vin^g equation, where g is the gamma value.
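As a hedged illustration, the per-pixel transform can be precomputed as a 256-entry lookup table following the equation above (the actual class may round or clamp differently):
// illustrative sketch: gamma lookup table for 8 bpp data, g = 0.5
double g = 0.5;
byte[] table = new byte[256];
for ( int i = 0; i < 256; i++ )
{
    table[i] = (byte) Math.Min( 255, (int) ( Math.Pow( i / 255.0, g ) * 255 + 0.5 ) );
}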
The filter accepts 8 bpp grayscale and 24 bpp color images for processing.
Sample usage:
// create filter
GammaCorrection filter = new GammaCorrection( 0.5 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Gamma value, [0.1, 5.0].
Default value is set to 2.2.
Initializes a new instance of the class.
Initializes a new instance of the class.
Gamma value.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Base class for image grayscaling.
This class is the base class for image grayscaling. Other
classes should inherit from this class and specify RGB
coefficients used for color image conversion to grayscale.
The filter accepts 24, 32, 48 and 64 bpp color images and produces
8 (if source is 24 or 32 bpp image) or 16 (if source is 48 or 64 bpp image)
bpp grayscale image.
Sample usage:
// create grayscale filter (BT709)
Grayscale filter = new Grayscale( 0.2125, 0.7154, 0.0721 );
// apply the filter
Bitmap grayImage = filter.Apply( image );
Initial image:
Result image:
Set of predefined common grayscaling algorithms, which have
already initialized grayscaling coefficients.
Grayscale image using BT709 algorithm.
The instance uses BT709 algorithm to convert color image
to grayscale. The conversion coefficients are:
- Red: 0.2125;
- Green: 0.7154;
- Blue: 0.0721.
Sample usage:
// apply the filter
Bitmap grayImage = Grayscale.CommonAlgorithms.BT709.Apply( image );
Grayscale image using R-Y algorithm.
The instance uses R-Y algorithm to convert color image
to grayscale. The conversion coefficients are:
- Red: 0.5;
- Green: 0.419;
- Blue: 0.081.
Sample usage:
// apply the filter
Bitmap grayImage = Grayscale.CommonAlgorithms.RMY.Apply( image );
Grayscale image using Y algorithm.
The instance uses Y algorithm to convert color image
to grayscale. The conversion coefficients are:
- Red: 0.299;
- Green: 0.587;
- Blue: 0.114.
Sample usage:
// apply the filter
Bitmap grayImage = Grayscale.CommonAlgorithms.Y.Apply( image );
Portion of red channel's value to use during conversion from RGB to grayscale.
Portion of green channel's value to use during conversion from RGB to grayscale.
Portion of blue channel's value to use during conversion from RGB to grayscale.
Format translations dictionary.
Initializes a new instance of the class.
Red coefficient.
Green coefficient.
Blue coefficient.
Process the filter on the specified image.
Source image data.
Destination image data.
Grayscale image using BT709 algorithm.
The class uses BT709 algorithm to convert color image
to grayscale. The conversion coefficients are:
- Red: 0.2125;
- Green: 0.7154;
- Blue: 0.0721.
Initializes a new instance of the class.
Grayscale image using R-Y algorithm.
The class uses R-Y algorithm to convert color image
to grayscale. The conversion coefficients are:
- Red: 0.5;
- Green: 0.419;
- Blue: 0.081.
Initializes a new instance of the class.
Convert grayscale image to RGB.
The filter creates color image from specified grayscale image
initializing all RGB channels to the same value - pixel's intensity of grayscale image.
The filter accepts 8 bpp grayscale images and produces
24 bpp RGB image.
Sample usage:
// create filter
GrayscaleToRGB filter = new GrayscaleToRGB( );
// apply the filter
Bitmap rgbImage = filter.Apply( image );
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Destination image data.
Grayscale image using Y algorithm.
The class uses Y algorithm to convert color image
to grayscale. The conversion coefficients are:
- Red: 0.299;
- Green: 0.587;
- Blue: 0.114.
Initializes a new instance of the class.
Histogram equalization filter.
The filter does histogram equalization increasing local contrast in images. The effect
of histogram equalization can be better seen on images, where pixel values have close contrast values.
Through this adjustment, pixel intensities can be better distributed on the histogram. This allows for
areas of lower local contrast to gain a higher contrast without affecting the global contrast.
The filter accepts 8 bpp grayscale images and 24/32 bpp
color images for processing.
For color images the histogram equalization is applied to each color plane separately.
Sample usage:
// create filter
HistogramEqualization filter = new HistogramEqualization( );
// process image
filter.ApplyInPlace( sourceImage );
Source image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Invert image.
The filter inverts colored and grayscale images.
The filter accepts 8, 16 bpp grayscale and 24, 48 bpp color images for processing.
Sample usage:
// create filter
Invert filter = new Invert( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Linear correction of RGB channels.
The filter performs linear correction of RGB channels by mapping the specified
channels' input ranges to output ranges. It is similar to the
color remapping filter, but the remapping is linear.
The filter accepts 8 bpp grayscale and 24/32 bpp color images for processing.
Sample usage:
// create filter
LevelsLinear filter = new LevelsLinear( );
// set ranges
filter.InRed = new IntRange( 30, 230 );
filter.InGreen = new IntRange( 50, 240 );
filter.InBlue = new IntRange( 10, 210 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Red component's input range.
Green component's input range.
Blue component's input range.
Gray component's input range.
Input range for RGB components.
The property allows to set red, green and blue input ranges to the same value.
Red component's output range.
Green component's output range.
Blue component's output range.
Gray component's output range.
Output range for RGB components.
The property allows to set red, green and blue output ranges to the same value.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Calculate conversion map.
Input range.
Output range.
Conversion map.
Linear correction of RGB channels for images with 16 bpp planes (16-bit gray images or 48/64-bit color images).
The filter performs linear correction of RGB channels by mapping the specified
channels' input ranges to output ranges. This version of the filter processes only images
with 16 bpp color planes. See the 8 bpp levels linear filter for other images.
The filter accepts 16 bpp grayscale and 48/64 bpp color images for processing.
Sample usage:
// create filter
LevelsLinear16bpp filter = new LevelsLinear16bpp( );
// set ranges
filter.InRed = new IntRange( 3000, 42000 );
filter.InGreen = new IntRange( 5000, 37500 );
filter.InBlue = new IntRange( 1000, 60000 );
// apply the filter
filter.ApplyInPlace( image );
Format translations dictionary.
Red component's input range.
Green component's input range.
Blue component's input range.
Gray component's input range.
Input range for RGB components.
The property allows to set red, green and blue input ranges to the same value.
Red component's output range.
Green component's output range.
Blue component's output range.
Gray component's output range.
Output range for RGB components.
The property allows to set red, green and blue output ranges to the same value.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Calculate conversion map.
Input range.
Output range.
Conversion map.
Replace RGB channel of color image.
Replaces specified RGB channel of color image with
specified grayscale image.
The filter is quite useful in conjunction with the ExtractChannel filter
(however it may be used alone in some cases). Using the ExtractChannel filter
it is possible to extract one of the RGB channels, perform some image processing on it, and then
put it back into the original color image.
The filter accepts 24, 32, 48 and 64 bpp color images for processing.
Sample usage:
// extract red channel
ExtractChannel extractFilter = new ExtractChannel( RGB.R );
Bitmap channel = extractFilter.Apply( image );
// threshold channel
Threshold thresholdFilter = new Threshold( 230 );
thresholdFilter.ApplyInPlace( channel );
// put the channel back
ReplaceChannel replaceFilter = new ReplaceChannel( RGB.R, channel );
replaceFilter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
ARGB channel to replace.
Default value is set to .
Invalid channel is specified.
Grayscale image to use for channel replacement.
Setting this property will clear the property -
only one channel image is allowed: managed or unmanaged.
Channel image should be 8 bpp indexed or 16 bpp grayscale image.
Unmanaged grayscale image to use for channel replacement.
Setting this property will clear the property -
only one channel image is allowed: managed or unmanaged.
Channel image should be 8 bpp indexed or 16 bpp grayscale image.
Initializes a new instance of the class.
ARGB channel to replace.
Initializes a new instance of the class.
ARGB channel to replace.
Channel image to use for replacement.
Initializes a new instance of the class.
RGB channel to replace.
Unmanaged channel image to use for replacement.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Channel image was not specified.
Channel image size does not match source
image size.
Channel image's format does not correspond to format of the source image.
Cannot replace the alpha channel of a non-ARGB image. The
exception is thrown when the alpha channel is requested to be replaced in an RGB image.
Rotate RGB channels.
The filter rotates RGB channels: red channel is replaced with green,
green channel is replaced with blue, blue channel is replaced with red.
The filter accepts 24/32 bpp color images for processing.
Sample usage:
// create filter
RotateChannels filter = new RotateChannels( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Sepia filter - old brown photo.
The filter makes an image look like an old brown photo. The main
idea of the algorithm:
- transform to YIQ color space;
- modify it;
- transform back to RGB.
1) RGB -> YIQ:
Y = 0.299 * R + 0.587 * G + 0.114 * B
I = 0.596 * R - 0.274 * G - 0.322 * B
Q = 0.212 * R - 0.523 * G + 0.311 * B
2) update:
I = 51
Q = 0
3) YIQ -> RGB:
R = 1.0 * Y + 0.956 * I + 0.621 * Q
G = 1.0 * Y - 0.272 * I - 0.647 * Q
B = 1.0 * Y - 1.105 * I + 1.702 * Q
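The following hedged sketch applies the three steps above to a single pixel (r, g and b are the pixel's byte values; the actual filter works on raw image data):
// illustrative sketch: sepia transform of one RGB triple
double y = 0.299 * r + 0.587 * g + 0.114 * b; // RGB -> Y
double i = 51.0;                              // I is set to a fixed value
double q = 0.0;                               // Q is set to zero
byte newR = (byte) Math.Min( 255.0, Math.Max( 0.0, 1.0 * y + 0.956 * i + 0.621 * q ) );
byte newG = (byte) Math.Min( 255.0, Math.Max( 0.0, 1.0 * y - 0.272 * i - 0.647 * q ) );
byte newB = (byte) Math.Min( 255.0, Math.Max( 0.0, 1.0 * y - 1.105 * i + 1.702 * q ) );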
The filter accepts 24/32 bpp color images for processing.
Sample usage:
// create filter
Sepia filter = new Sepia( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Simple posterization of an image.
The class implements simple posterization of an image by splitting
each color plane into adjacent areas of the specified size. After the process
is done, each color plane will contain a maximum of 256/PosterizationInterval levels.
For example, if a grayscale image is posterized with a posterization interval equal to 64,
then the result image will contain a maximum of 4 tones. If a color image is posterized with the
same posterization interval, then it will contain a maximum of 4³ = 64 colors.
See the filling type property for information about how to control the
color used to fill posterization areas.
Posterization is a process in photograph development which converts normal photographs
into an image consisting of distinct, but flat, areas of different tones or colors.
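As a hedged illustration, posterizing a single 8 bpp value with an interval of 64 and minimum-value filling reduces to integer arithmetic:
// illustrative sketch: posterize one value (minimum filling type)
int interval = 64;
byte v = 100;
byte posterized = (byte) ( v / interval * interval ); // 100 -> 64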
The filter accepts 8 bpp grayscale and 24/32 bpp color images.
Sample usage:
// create filter
SimplePosterization filter = new SimplePosterization( );
// process image
filter.ApplyInPlace( sourceImage );
Initial image:
Result image:
Enumeration of possible types of filling posterized areas.
Fill area with minimum color's value.
Fill area with maximum color's value.
Fill area with average color's value.
Posterization interval, which specifies size of posterization areas.
The property specifies the size of adjacent posterization areas
for each color plane. The value has a direct effect on the number of colors
in the result image. For example, if a grayscale image is posterized with a posterization
interval equal to 64, then the result image will contain a maximum of 4 tones. If a color
image is posterized with the same posterization interval, then it will contain a maximum
of 4³ = 64 colors.
Default value is set to 64.
Posterization filling type.
The property controls the color, which is used to substitute
colors within the same posterization interval - minimum, maximum or average value.
Default value is set to .
Format translations dictionary.
Initializes a new instance of the class.
Initializes a new instance of the class.
Specifies filling type of posterization areas.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Blur filter.
The filter performs a convolution using
the blur kernel:
1 2 3 2 1
2 4 5 4 2
3 5 6 5 3
2 4 5 4 2
1 2 3 2 1
For the list of supported pixel formats, see the documentation to
filter.
By default this filter sets the alpha channel processing property to
true, so the alpha channel of 32 bpp and 64 bpp images is blurred as well.
Sample usage:
// create filter
Blur filter = new Blur( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Initializes a new instance of the class.
Convolution filter.
The filter implements the convolution operator, which calculates each pixel
of the result image as a weighted sum of the corresponding pixel and its neighbors in the source
image. The weights are set by the convolution kernel. The weighted
sum is divided by the division factor before putting it into the result image and also
may be thresholded using the threshold value.
Convolution is a simple mathematical operation which is fundamental to many common
image processing filters. Depending on the type of provided kernel, the filter may produce
different results, like blurring the image, sharpening it, finding edges, etc.
The filter accepts 8 and 16 bpp grayscale images and 24, 32, 48 and 64 bpp
color images for processing. Note: depending on the value of
property, the alpha channel is either copied as is or processed with the kernel.
Sample usage:
// define emboss kernel
int[,] kernel = {
{ -2, -1, 0 },
{ -1, 1, 1 },
{ 0, 1, 2 } };
// create filter
Convolution filter = new Convolution( kernel );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Convolution kernel.
Convolution kernel must be square and its width/height
should be odd and should be in the [3, 99] range.
Setting the convolution kernel through this property does not
affect the division factor - it is not recalculated automatically.
Invalid kernel size is specified.
Division factor.
The value is used to divide convolution - weighted sum
of pixels is divided by this value.
The value may be calculated automatically when the constructor
with one parameter (the kernel) is used.
Divisor can not be equal to zero.
Threshold to add to weighted sum.
The property specifies the threshold value, which is added to each weighted
sum of pixels. The value is added right after the division by the
division factor is done.
Default value is set to 0.
Use dynamic divisor for edges or not.
The property specifies how to handle edges. If it is set to
false, then the same divisor (which is specified by
the divisor property or calculated automatically) will be applied both for non-edge regions
and for edge regions. If the value is set to true, then a dynamically
calculated divisor will be used for edge regions, which is the sum of those kernel
elements which are taken into account for the particular processed pixel
(elements which are not outside the image).
Default value is set to .
Specifies if alpha channel must be processed or just copied.
The property specifies the way the alpha channel is handled for 32 bpp
and 64 bpp images. If the property is set to false, then the alpha
channel's values are just copied as is. If the property is set to true,
then the alpha channel is convolved using the specified kernel the same way as the RGB channels.
Default value is set to .
Initializes a new instance of the class.
Initializes a new instance of the class.
Convolution kernel.
Using this constructor (specifying only the convolution kernel), the
division factor will be calculated automatically by
summing all kernel values. If the kernel's sum equals zero, the
division factor will be set to 1.
Invalid kernel size is specified. Kernel must be
square, its width/height should be odd and should be in the [3, 25] range.
Initializes a new instance of the class.
Convolution kernel.
Divisor, used to divide the weighted sum.
Invalid kernel size is specified. Kernel must be
square, its width/height should be odd and should be in the [3, 25] range.
Divisor can not be equal to zero.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Simple edge detector.
The filter performs a convolution using
the edges kernel:
0 -1 0
-1 4 -1
0 -1 0
For the list of supported pixel formats, see the documentation to
filter.
Sample usage:
// create filter
Edges filter = new Edges( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Initializes a new instance of the class.
Gaussian blur filter.
The filter performs a convolution using
a kernel which is calculated from a 2-D Gaussian function and
then converted to an integer kernel by dividing all elements by the element with the
smallest value. Using this kernel, the convolution filter is known as Gaussian blur.
Using the sigma property it is possible to configure the
sigma value of the Gaussian function.
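The following hedged sketch shows how such an integer kernel could be built from a sampled 2-D Gaussian (the library's own helper may sample and normalize differently):
// illustrative sketch: integer blur kernel from a 2-D Gaussian, sigma = 1.4, size = 5
double sigma = 1.4;
int size = 5, r = size / 2;
double[,] g = new double[size, size];
for ( int y = -r; y <= r; y++ )
    for ( int x = -r; x <= r; x++ )
        g[y + r, x + r] = Math.Exp( -( x * x + y * y ) / ( 2 * sigma * sigma ) );
// divide by the smallest element (a corner) to get integer values
double min = g[0, 0];
int[,] kernel = new int[size, size];
for ( int y = 0; y < size; y++ )
    for ( int x = 0; x < size; x++ )
        kernel[y, x] = (int) ( g[y, x] / min );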
For the list of supported pixel formats, see the documentation to
filter.
By default this filter sets the alpha channel processing property to
true, so the alpha channel of 32 bpp and 64 bpp images is blurred as well.
Sample usage:
// create filter with kernel size equal to 11
// and Gaussian sigma value equal to 4.0
GaussianBlur filter = new GaussianBlur( 4, 11 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Gaussian sigma value, [0.5, 5.0].
Sigma value for Gaussian function used to calculate
the kernel.
Default value is set to 1.4.
Kernel size, [3, 21].
Size of Gaussian kernel.
Default value is set to 5.
Initializes a new instance of the class.
Initializes a new instance of the class.
Gaussian sigma value.
Initializes a new instance of the class.
Gaussian sigma value.
Kernel size.
Mean filter.
The filter averages each pixel value with its 8 neighbors, which is a
convolution using the mean kernel:
1 1 1
1 1 1
1 1 1
For the list of supported pixel formats, see the documentation to
filter.
With the above kernel, the convolution filter simply calculates each pixel's value
in the result image as the average of the 9 corresponding pixels in the source image.
By default this filter sets the alpha channel processing property to
true, so the alpha channel of 32 bpp and 64 bpp images is blurred as well.
Sample usage:
// create filter
Mean filter = new Mean( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Initializes a new instance of the class.
Sharpen filter.
The filter performs a convolution using
the sharpen kernel:
0 -1 0
-1 5 -1
0 -1 0
For the list of supported pixel formats, see the documentation to
filter.
Sample usage:
// create filter
Sharpen filter = new Sharpen( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Initializes a new instance of the class.
Gaussian sharpen filter.
The filter performs a convolution using
a kernel which is calculated from a 2-D Gaussian function and
then converted to an integer sharpening kernel. First of all, the integer kernel
is calculated from the Gaussian kernel by dividing all elements by
the element with the smallest value. Then the integer kernel is converted to a sharpening kernel by
negating all of the kernel's elements (multiplying by -1), while the central element
is calculated as 2 * sum - centralElement, where sum is the sum of elements
in the integer kernel before negating.
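The following hedged sketch shows the conversion, assuming an integer kernel and its size were already built as for the Gaussian blur filter above:
// illustrative sketch: turn an integer Gaussian kernel into a sharpening kernel
int sum = 0;
foreach ( int k in kernel )
    sum += k;                         // sum of elements before negating
int c = size / 2;
int central = kernel[c, c];
for ( int y = 0; y < size; y++ )
    for ( int x = 0; x < size; x++ )
        kernel[y, x] = -kernel[y, x]; // negate every element
kernel[c, c] = 2 * sum - central;     // recompute the central element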
For the list of supported pixel formats, see the documentation to
filter.
Sample usage:
// create filter with kernel size equal to 11
// and Gaussian sigma value equal to 4.0
GaussianSharpen filter = new GaussianSharpen( 4, 11 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Gaussian sigma value, [0.5, 5.0].
Sigma value for Gaussian function used to calculate
the kernel.
Default value is set to 1.4.
Kernel size, [3, 5].
Size of Gaussian kernel.
Default value is set to 5.
Initializes a new instance of the class.
Initializes a new instance of the class.
Gaussian sigma value.
Initializes a new instance of the class.
Gaussian sigma value.
Kernel size.
Canny edge detector.
The filter searches for objects' edges by applying Canny edge detector.
The implementation follows
Bill Green's Canny edge detection tutorial.
The implemented Canny edge detector has one difference from the algorithm linked above.
The difference is in the hysteresis step, which is a bit simplified (and faster as a result). On the
hysteresis step each pixel is compared with two threshold values: the high threshold and
the low threshold. If a pixel's value is greater than or equal to the high threshold, then
it is kept as an edge pixel. If a pixel's value is greater than or equal to the low threshold, then
it is kept as an edge pixel only if there is at least one neighbouring pixel (8 neighbours are checked) which
has a value greater than or equal to the high threshold; otherwise it is a non-edge pixel. If
a pixel's value is less than the low threshold, then it is marked as non-edge immediately.
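The following hedged sketch shows this decision for one pixel; v, lowThreshold, highThreshold and anyNeighborAboveHigh are illustrative names (the last one would be computed from the 8 surrounding pixels):
// illustrative sketch: simplified hysteresis decision for one pixel
bool keepAsEdge;
if ( v >= highThreshold )
    keepAsEdge = true;                  // strong edge: kept unconditionally
else if ( v >= lowThreshold )
    keepAsEdge = anyNeighborAboveHigh;  // weak edge: kept only near a strong pixel
else
    keepAsEdge = false;                 // below the low threshold: discarded immediately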
The filter accepts 8 bpp grayscale images for processing.
Sample usage:
// create filter
CannyEdgeDetector filter = new CannyEdgeDetector( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Low threshold.
Low threshold value used for hysteresis
(see tutorial
for more information).
Default value is set to 20.
High threshold.
High threshold value used for hysteresis
(see tutorial
for more information).
Default value is set to 100.
Gaussian sigma.
Sigma value for Gaussian bluring.
Gaussian size.
Size of Gaussian kernel.
Initializes a new instance of the class.
Initializes a new instance of the class.
Low threshold.
High threshold.
Initializes a new instance of the class.
Low threshold.
High threshold.
Gaussian sigma.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Difference edge detector.
The filter finds objects' edges by calculating maximum difference
between pixels in 4 directions around the processing pixel.
Consider a 3x3 square element of the source image (x is the currently processed
pixel):
P1 P2 P3
P8 x P4
P7 P6 P5
The corresponding pixel of the result image is set to:
max( |P1-P5|, |P2-P6|, |P3-P7|, |P4-P8| )
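As a hedged illustration, the measure for one neighborhood (p1..p8 as labeled above) can be computed as:
// illustrative sketch: difference edge response for one 3x3 neighborhood
int edge = Math.Max( Math.Max( Math.Abs( p1 - p5 ), Math.Abs( p2 - p6 ) ),
                     Math.Max( Math.Abs( p3 - p7 ), Math.Abs( p4 - p8 ) ) );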
The filter accepts 8 bpp grayscale images for processing.
Sample usage:
// create filter
DifferenceEdgeDetector filter = new DifferenceEdgeDetector( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Homogenity edge detector.
The filter finds objects' edges by calculating the maximum difference
between the processed pixel and its neighboring pixels in 8 directions.
Consider a 3x3 square element of the source image (x is the currently processed
pixel):
P1 P2 P3
P8 x P4
P7 P6 P5
The corresponding pixel of the result image is set to:
max( |x-P1|, |x-P2|, |x-P3|, |x-P4|,
|x-P5|, |x-P6|, |x-P7|, |x-P8| )
The filter accepts 8 bpp grayscale images for processing.
Sample usage:
// create filter
HomogenityEdgeDetector filter = new HomogenityEdgeDetector( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Sobel edge detector.
The filter searches for objects' edges by applying the Sobel operator.
Each pixel of the result image is calculated as the approximated absolute gradient
magnitude for the corresponding pixel of the source image:
|G| = |Gx| + |Gy|,
where Gx and Gy are calculated using the Sobel convolution kernels:
Gx Gy
-1 0 +1 +1 +2 +1
-2 0 +2 0 0 0
-1 0 +1 -1 -2 -1
Using the above kernels, the approximated magnitude for pixel x is calculated using
the following equation:
P1 P2 P3
P8 x P4
P7 P6 P5
|G| = |P1 + 2P2 + P3 - P7 - 2P6 - P5| +
|P3 + 2P4 + P5 - P1 - 2P8 - P7|
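As a hedged illustration, the same computation for one neighborhood (p1..p8 as labeled above) is:
// illustrative sketch: approximated Sobel magnitude for one 3x3 neighborhood
int gx = ( p3 + 2 * p4 + p5 ) - ( p1 + 2 * p8 + p7 );
int gy = ( p1 + 2 * p2 + p3 ) - ( p7 + 2 * p6 + p5 );
int magnitude = Math.Min( 255, Math.Abs( gx ) + Math.Abs( gy ) );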
The filter accepts 8 bpp grayscale images for processing.
Sample usage:
// create filter
SobelEdgeDetector filter = new SobelEdgeDetector( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Scale intensity or not.
The property determines if the intensities of the result image's edge pixels
should be scaled to the range between the lowest and the highest possible intensity
values.
Default value is set to .
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Filter iterator.
The filter iterator performs a specified number of iterations of a base filter:
it takes the specified base filter and applies it
to the source image the specified number of times.
The filter itself does not place any restrictions on the pixel format of the source
image; this is determined by the base filter.
The filter does image processing using only
the basic filtering interface of the specified base filter. This means
that this filter may not utilize all potential features of the base filter, like
in-place processing and region based processing. To utilize those features, it is required to
do the filter's iterations manually, as in the sketch below.
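A hedged sketch of such a manual iteration, using an in-place capable base filter:
// illustrative sketch: iterate a base filter manually to keep in-place processing
Blur baseFilter = new Blur( );
for ( int i = 0; i < 10; i++ )
{
    baseFilter.ApplyInPlace( image ); // no intermediate image copies
}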
Sample usage (morphological thinning):
// create filter sequence
FiltersSequence filterSequence = new FiltersSequence( );
// add 8 thinning filters with different structuring elements
filterSequence.Add( new HitAndMiss(
new short [,] { { 0, 0, 0 }, { -1, 1, -1 }, { 1, 1, 1 } },
HitAndMiss.Modes.Thinning ) );
filterSequence.Add( new HitAndMiss(
new short [,] { { -1, 0, 0 }, { 1, 1, 0 }, { -1, 1, -1 } },
HitAndMiss.Modes.Thinning ) );
filterSequence.Add( new HitAndMiss(
new short [,] { { 1, -1, 0 }, { 1, 1, 0 }, { 1, -1, 0 } },
HitAndMiss.Modes.Thinning ) );
filterSequence.Add( new HitAndMiss(
new short [,] { { -1, 1, -1 }, { 1, 1, 0 }, { -1, 0, 0 } },
HitAndMiss.Modes.Thinning ) );
filterSequence.Add( new HitAndMiss(
new short [,] { { 1, 1, 1 }, { -1, 1, -1 }, { 0, 0, 0 } },
HitAndMiss.Modes.Thinning ) );
filterSequence.Add( new HitAndMiss(
new short [,] { { -1, 1, -1 }, { 0, 1, 1 }, { 0, 0, -1 } },
HitAndMiss.Modes.Thinning ) );
filterSequence.Add( new HitAndMiss(
new short [,] { { 0, -1, 1 }, { 0, 1, 1 }, { 0, -1, 1 } },
HitAndMiss.Modes.Thinning ) );
filterSequence.Add( new HitAndMiss(
new short [,] { { 0, 0, -1 }, { 0, 1, 1 }, { -1, 1, -1 } },
HitAndMiss.Modes.Thinning ) );
// create filter iterator for 10 iterations
FilterIterator filter = new FilterIterator( filterSequence, 10 );
// apply the filter
Bitmap newImage = filter.Apply( image );
Initial image:
Result image:
Format translations dictionary.
See
documentation for additional information.
The filter provides the format translation dictionary taken from
the base filter.
Base filter.
The base filter is the filter to be applied the specified number of iterations to
a specified image.
Iterations amount, [1, 255].
The number of times to apply the specified filter to a specified image.
Default value is set to 1.
Initializes a new instance of the class.
Filter to iterate.
Initializes a new instance of the class.
Filter to iterate.
Iterations amount.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The filter accepts bitmap data as input and returns the result
of image processing filter as new image. The source image data are kept
unchanged.
Apply filter to an image in unmanaged memory.
Source image in unmanaged memory to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
Apply filter to an image in unmanaged memory.
Source image in unmanaged memory to apply filter to.
Destination image in unmanaged memory to put result into.
The method keeps the source image unchanged and puts result of image processing
into destination image.
The destination image must have the same width and height as the source image. The
destination image must also have the pixel format expected by the particular filter (see
the format translations property for information about pixel format conversions).
Filters' collection to apply to an image in sequence.
The class represents a collection of filters which need to be applied
to an image in sequence. Using the class, the user may specify a set of filters which will
be applied to the source image one by one, in the order the user defines them.
The class itself does not define which pixel formats are accepted for the source
image or which pixel formats may be produced by the filter. The formats of acceptable source
images and possible output are defined by the filters added to the sequence.
Sample usage:
// create filter, which is binarization sequence
FiltersSequence filter = new FiltersSequence(
new GrayscaleBT709( ),
new Threshold( )
);
// apply the filter
Bitmap newImage = filter.Apply( image );
Initializes a new instance of the class.
Initializes a new instance of the class.
Sequence of filters to apply.
Get filter at the specified index.
Index of filter to get.
Returns filter at specified index.
Add new filter to the sequence.
Filter to add to the sequence.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
No filters were added into the filters' sequence.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The filter accepts bitmap data as input and returns the result
of image processing filter as new image. The source image data are kept
unchanged.
No filters were added into the filters' sequence.
Apply filter to an image in unmanaged memory.
Source image in unmanaged memory to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
No filters were added into the filters' sequence.
Apply filter to an image in unmanaged memory.
Source image in unmanaged memory to apply filter to.
Destination image in unmanaged memory to put result into.
The method keeps the source image unchanged and puts result of image processing
into destination image.
The destination image must have the width, height and pixel format expected by
the final filter in the sequence.
No filters were added into the filters' sequence.
Flood filling with specified color starting from specified point.
The filter performs area filling (4 directional) starting
from the specified point. It fills
the area of the pointed color, but also fills other colors which
are similar to the pointed one within the specified tolerance.
The area is filled using the specified fill color.
The filter accepts 8 bpp grayscale images and 24 bpp
color images for processing.
Sample usage:
// create filter
PointedColorFloodFill filter = new PointedColorFloodFill( );
// configure the filter
filter.Tolerance = Color.FromArgb( 150, 92, 92 );
filter.FillColor = Color.FromArgb( 255, 255, 255 );
filter.StartingPoint = new IntPoint( 150, 100 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Flood fill tolerance.
The tolerance value determines which colors to fill. If the
value is set to 0, then only color of the pointed pixel
is filled. If the value is not 0, then other colors may be filled as well,
which are similar to the color of the pointed pixel within the specified
tolerance.
The tolerance value is specified as ,
where each component (R, G and B) represents tolerance for the corresponding
component of color. This allows to set different tolerances for red, green
and blue components.
Fill color.
The fill color is used to fill image's area starting from the
specified point.
For grayscale images the color needs to be specified with all three
RGB values set to the same value, (128, 128, 128) for example.
Default value is set to black.
Point to start filling from.
The property allows to set the starting point, where filling is
started from.
Default value is set to (0, 0).
Initializes a new instance of the class.
Initializes a new instance of the class.
Fill color.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Flood filling with mean color starting from specified point.
The filter performs area filling (4 directional) starting
from the specified point. It fills
the area of the pointed color, but also fills other colors which
are similar to the pointed one within the specified tolerance.
The area is filled using its mean color.
The filter is similar to the pointed color flood fill filter, but instead
of filling the area with a specified color, it fills the area with its mean color. This means
that this is a two pass filter - the first pass calculates the mean value and the second pass
fills the area. Unlike the pointed color flood fill filter, this filter has nothing
to do if zero tolerance is specified.
The filter accepts 8 bpp grayscale images and 24 bpp
color images for processing.
Sample usage:
// create filter
PointedMeanFloodFill filter = new PointedMeanFloodFill( );
// configure the filter
filter.Tolerance = Color.FromArgb( 150, 92, 92 );
filter.StartingPoint = new IntPoint( 150, 100 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
See
documentation for additional information.
Flood fill tolerance.
The tolerance value determines the level of similarity between
colors to fill and the pointed color. If the value is set to zero, then the
filter does nothing, since the filling area contains only one color and its
filling with mean is meaningless.
The tolerance value is specified as ,
where each component (R, G and B) represents tolerance for the corresponding
component of color. This allows to set different tolerances for red, green
and blue components.
Default value is set to (16, 16, 16).
Point to start filling from.
The property allows to set the starting point, where filling is
started from.
Default value is set to (0, 0).
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Color filtering in HSL color space.
The filter operates in HSL color space and filters
pixels whose color is inside/outside of the specified HSL range -
it keeps pixels with colors inside/outside of the specified range and fills the
rest with the specified color.
The filter accepts 24 and 32 bpp color images for processing.
Sample usage:
// create filter
HSLFiltering filter = new HSLFiltering( );
// set color ranges to keep
filter.Hue = new IntRange( 335, 0 );
filter.Saturation = new Range( 0.6f, 1 );
filter.Luminance = new Range( 0.1f, 1 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Sample usage with saturation update only:
// create filter
HSLFiltering filter = new HSLFiltering( );
// configure the filter
filter.Hue = new IntRange( 340, 20 );
filter.UpdateLuminance = false;
filter.UpdateHue = false;
// apply the filter
filter.ApplyInPlace( image );
Result image:
Format translations dictionary.
Range of hue component, [0, 359].
Because hue values are cyclic, the minimum value of the hue
range may be a bigger integer than the maximum value, for example [330, 30].
Range of saturation component, [0, 1].
Range of luminance component, [0, 1].
Fill color used to fill filtered pixels.
Determines if pixels should be filled inside or outside the specified
color range.
Default value is set to true, which means
the filter removes colors outside of the specified range.
Determines if the hue value of filtered pixels should be updated.
The property specifies if hue of filtered pixels should be
updated with value from fill color or not.
Default value is set to .
Determines if the saturation value of filtered pixels should be updated.
The property specifies if saturation of filtered pixels should be
updated with value from fill color or not.
Default value is set to .
Determines if the luminance value of filtered pixels should be updated.
The property specifies if luminance of filtered pixels should be
updated with value from fill color or not.
Default value is set to .
Initializes a new instance of the class.
Initializes a new instance of the class.
Range of hue component.
Range of saturation component.
Range of luminance component.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Luminance and saturation linear correction.
The filter operates in HSL color space and provides
linear correction of luminance and saturation - mapping the specified channels'
input ranges to the specified output ranges.
The filter accepts 24 and 32 bpp color images for processing.
Sample usage:
// create filter
HSLLinear filter = new HSLLinear( );
// configure the filter
filter.InLuminance = new Range( 0, 0.85f );
filter.OutSaturation = new Range( 0.25f, 1 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Luminance input range.
Luminance component is measured in the range of [0, 1].
Luminance output range.
Luminance component is measured in the range of [0, 1].
Saturation input range.
Saturation component is measured in the range of [0, 1].
Saturation output range.
Saturation component is measured in the range of [0, 1].
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Hue modifier.
The filter operates in HSL color space and updates
pixels' hue values, setting them to the specified value (luminance and
saturation are kept unchanged). The result of the filter looks like the image
is observed through a glass of the given color.
The filter accepts 24 and 32 bpp color images for processing.
Sample usage:
// create filter
HueModifier filter = new HueModifier( 180 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Hue value to set, [0, 359].
Default value is set to 0.
Initializes a new instance of the class.
Initializes a new instance of the class.
Hue value to set.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Saturation adjusting in HSL color space.
The filter operates in HSL color space and adjusts
pixels' saturation value, increasing or decreasing it by the specified percentage.
The filter is based on the HSL linear correction filter, passing work to it after
recalculating the saturation adjust value into the input/output
ranges of that filter.
The filter accepts 24 and 32 bpp color images for processing.
Sample usage:
// create filter
SaturationCorrection filter = new SaturationCorrection( -0.5f );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Saturation adjust value, [-1, 1].
Default value is set to 0.1, which corresponds to increasing
saturation by 10%.
Format translations dictionary.
Initializes a new instance of the class.
Initializes a new instance of the class.
Saturation adjust value.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Image processing filter interface.
The interface defines the set of methods which should be
provided by all image processing filters. Methods of this interface
keep the source image unchanged and return the result of the image processing
filter as a new image.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The filter accepts bitmap data as input and returns the result
of image processing filter as new image. The source image data are kept
unchanged.
Apply filter to an image.
Image in unmanaged memory.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
Apply filter to an image.
Source image to be processed.
Destination image to store filter's result.
The method keeps the source image unchanged and puts
the result of the image processing filter into the destination image.
The destination image must have the size which is expected by
the filter.
The destination image has an incorrect
size.
Interface which provides information about image processing filter.
The interface defines a set of properties which provide different types
of information about image processing filters implementing the basic filtering interface
or another filter interface.
Format translations dictionary.
The dictionary defines which pixel formats are supported for
source images and which pixel format will be used for the resulting image.
Keys of this dictionary define all pixel formats which are supported for source
images, and the corresponding values define the resulting pixel format. For
example, if the value Format16bppGrayScale
is put into the dictionary with the
Format48bppRgb key, then it means
that the filter accepts a color 48 bpp image and produces a 16 bpp grayscale image as the result
of image processing.
The information provided by this property is mostly relevant for filters which can not
be applied directly to the source image, but provide a new image as a result. Since usually all
filters implement the basic filtering interface, the information provided by this property
(if the filter also implements the information interface) may help the
user resolve the filter's capabilities.
Sample usage:
// get filter's IFilterInformation interface
IFilterInformation info = (IFilterInformation) filter;
// check if the filter supports our image's format
if ( info.FormatTranslations.ContainsKey( image.PixelFormat ) )
{
// format is supported, check what will be result of image processing
PixelFormat resultingFormat = info.FormatTranslations[image.PixelFormat];
}
In-place filter interface.
The interface defines the set of methods which should be
implemented by filters which are capable of doing image processing
directly on the source image. Not all image processing filters
can be applied directly to the source image - only filters which do not
change the image's dimensions and pixel format can be applied directly to the
source image.
Apply filter to an image.
Image to apply filter to.
The method applies filter directly to the provided image data.
Apply filter to an image.
Image to apply filter to.
The method applies filter directly to the provided image data.
Apply filter to an image in unmanaged memory.
Image in unmanaged memory.
The method applies filter directly to the provided image data.
In-place partial filter interface.
The interface defines the set of methods which should be
implemented by filters which are capable of doing image processing
directly on the source image. Not all image processing filters
can be applied directly to the source image - only filters which do not
change the image's dimensions and pixel format can be applied directly to the
source image.
The interface also supports partial image filtering, allowing specification
of the image rectangle to be filtered.
Apply filter to an image or its part.
Image to apply filter to.
Image rectangle for processing by filter.
The method applies filter directly to the provided image data.
Apply filter to an image or its part.
Image to apply filter to.
Image rectangle for processing by filter.
The method applies filter directly to the provided image data.
Apply filter to an image in unmanaged memory.
Image in unmanaged memory.
Image rectangle for processing by filter.
The method applies filter directly to the provided image.
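As a minimal sketch, a filter implementing this interface can be applied to a sub-region
only (the BrightnessCorrection filter is used here purely for illustration):
// create filter which implements IInPlacePartialFilter
IInPlacePartialFilter filter = new BrightnessCorrection( 50 );
// process only the central part of the image
Rectangle rect = new Rectangle( image.Width / 4, image.Height / 4,
    image.Width / 2, image.Height / 2 );
filter.ApplyInPlace( image, rect );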
Flat field correction filter.
The goal of flat-field correction is to remove artifacts from 2-D images that
are caused by variations in the pixel-to-pixel sensitivity of the detector and/or by distortions
in the optical path. The filter requires two images for the input - source image, which represents
acquisition of some objects (using microscope, for example), and background image, which is taken
without any objects presented. The source image is corrected using the formula: src = bgMean * src / bg,
where src - source image's pixel value, bg - background image's pixel value, bgMean - mean
value of background image.
If the background image is not provided, then it will be automatically generated on each filter run
from the source image. The automatically generated background image is produced by running Gaussian Blur on the
original image with sigma value set to 5 and kernel size set to 21. Before blurring, the original image
is resized to 1/3 of its original size and then the result of blurring is resized back to the original size.
The class processes only grayscale (8 bpp indexed) and color (24 bpp) images.
Sample usage:
// create filter
FlatFieldCorrection filter = new FlatFieldCorrection( bgImage );
// process image
filter.ApplyInPlace( sourceImage );
Source image:
Background image:
Result image:
Background image used for flat field correction.
The property sets the background image (without any objects), which will be used
for illumination correction of an image passed to the filter.
The background image must have the same size and pixel format as the source image.
Otherwise an exception will be generated when the filter is applied to the source image.
Setting this property will clear the property -
only one background image is allowed: managed or unmanaged.
Background image used for flat field correction.
The property sets the background image (without any objects), which will be used
for illumination correction of an image passed to the filter.
The background image must have the same size and pixel format as the source image.
Otherwise an exception will be generated when the filter is applied to the source image.
Setting this property will clear the property -
only one background image is allowed: managed or unmanaged.
Format translations dictionary.
See for more information.
Initializes a new instance of the class.
This constructor does not set the background image, which means that the background
image will be generated on the fly on each filter run. The automatically generated background
image is produced by running Gaussian Blur on the original image with sigma value set to 5
and kernel size set to 21. Before blurring, the original image is resized to 1/3 of its original size
and then the result of blurring is resized back to the original size.
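A minimal sketch of using the automatic background estimation described above:
// create filter without setting a background image
FlatFieldCorrection filter = new FlatFieldCorrection( );
// apply the filter - the background is estimated on each run
filter.ApplyInPlace( image );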
Initializes a new instance of the class.
Background image used for flat field correction.
Process the filter on the specified image.
Source image data.
Bottom-hat operator from Mathematical Morphology.
Bottom-hat morphological operator subtracts
the input image from the result of morphological closing on
the input image.
Applied to a binary image, the filter allows getting all object parts which were
added by the closing filter but were not removed after that due
to formed connections/fillings.
The filter accepts 8 and 16 bpp grayscale images and 24 and 48 bpp
color images for processing.
Sample usage:
// create filter
BottomHat filter = new BottomHat( );
// apply the filter
filter.Apply( image );
Initial image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Initializes a new instance of the class.
Structuring element to pass to operator.
Process the filter on the specified image.
Source image data.
Closing operator from Mathematical Morphology.
Closing morphology operator equals to dilation followed
by erosion.
Applied to a binary image, the filter may be used to connect or fill objects. Since dilation is used
first, it may connect/fill object areas. Then erosion restores objects. But since dilation may have connected
something before, erosion may not remove it after that because of the formed connection.
See documentation to and classes for more
information and list of supported pixel formats.
Sample usage:
// create filter
Closing filter = new Closing( );
// apply the filter
filter.Apply( image );
Initial image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Initializes new instance of the class using
default structuring element for both and
classes - 3x3 structuring element with all elements equal to 1.
Initializes a new instance of the class.
Structuring element.
See documentation to and
classes for information about structuring element constraints.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
Unsupported pixel format of the source image.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The filter accepts bitmap data as input and returns the result
of image processing filter as new image. The source image data are kept
unchanged.
Apply filter to an image in unmanaged memory.
Source image in unmanaged memory to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
Unsupported pixel format of the source image.
Apply filter to an image in unmanaged memory.
Source image in unmanaged memory to apply filter to.
Destination image in unmanaged memory to put result into.
The method keeps the source image unchanged and puts result of image processing
into destination image.
The destination image must have the same width and height as source image. Also
destination image must have pixel format, which is expected by particular filter (see
property for information about pixel format conversions).
Unsupported pixel format of the source image.
Incorrect destination pixel format.
Destination image has wrong width and/or height.
Apply filter to an image.
Image to apply filter to.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Apply filter to an image.
Image data to apply filter to.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Apply filter to an unmanaged image.
Unmanaged image to apply filter to.
The method applies the filter directly to the provided source unmanaged image.
Unsupported pixel format of the source image.
Apply filter to an image or its part.
Image to apply filter to.
Image rectangle for processing by the filter.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Apply filter to an image or its part.
Image data to apply filter to.
Image rectangle for processing by the filter.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Apply filter to an unmanaged image or its part.
Unmanaged image to apply filter to.
Image rectangle for processing by the filter.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Dilation operator from Mathematical Morphology.
The filter assigns maximum value of surrounding pixels to each pixel of
the result image. Surrounding pixels, which should be processed, are specified by
structuring element: 1 - to process the neighbor, -1 - to skip it.
The filter is especially useful for binary image processing, where it allows growing
separate objects or joining objects.
For processing image with 3x3 structuring element, there are different optimizations
available, like and .
The filter accepts 8 and 16 bpp grayscale images and 24 and 48 bpp
color images for processing.
Sample usage:
// create filter
Dilation filter = new Dilation( );
// apply the filter
filter.Apply( image );
Initial image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Initializes new instance of the class using
default structuring element - 3x3 structuring element with all elements equal to 1.
Initializes a new instance of the class.
Structuring element.
Structuring element for the dilation morphological operator
must be a square matrix with odd size in the range of [3, 99].
Invalid size of structuring element.
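As a sketch, a custom structuring element may be passed to the constructor (the 5x5
cross shape below is chosen purely for illustration):
// define 5x5 cross-shaped structuring element
// (1 - process the neighbor, -1 - skip it)
short[,] se = new short[,] {
    { -1, -1,  1, -1, -1 },
    { -1, -1,  1, -1, -1 },
    {  1,  1,  1,  1,  1 },
    { -1, -1,  1, -1, -1 },
    { -1, -1,  1, -1, -1 }
};
// create filter with the custom structuring element
Dilation filter = new Dilation( se );
// apply the filter
filter.Apply( image );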
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Erosion operator from Mathematical Morphology.
The filter assigns minimum value of surrounding pixels to each pixel of
the result image. Surrounding pixels, which should be processed, are specified by
structuring element: 1 - to process the neighbor, -1 - to skip it.
The filter is especially useful for binary image processing, where it removes pixels which
are not surrounded by the specified amount of neighbors. It gives the ability to remove noisy pixels
(stand-alone pixels) or to shrink objects.
For processing image with 3x3 structuring element, there are different optimizations
available, like and .
The filter accepts 8 and 16 bpp grayscale images and 24 and 48 bpp
color images for processing.
Sample usage:
// create filter
Erosion filter = new Erosion( );
// apply the filter
filter.Apply( image );
Initial image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Initializes new instance of the class using
default structuring element - 3x3 structuring element with all elements equal to 1.
Initializes a new instance of the class.
Structuring element.
Structuring element for the erosion morphological operator
must be a square matrix with odd size in the range of [3, 99].
Invalid size of structuring element.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Hit-And-Miss operator from Mathematical Morphology.
The hit-and-miss filter represents a generalization of the
and filters by extending the flexibility of the structuring element and
providing different modes of operation. The structuring element may contain:
- 1 - foreground;
- 0 - background;
- -1 - don't care.
Filter's mode is set by property. The list of modes and its
documentation may be found in enumeration.
The filter accepts 8 bpp grayscale images for processing. Note: grayscale images are treated
as binary with 0 value equals to black and 255 value equals to white.
Sample usage:
// define kernel to remove pixels on the right side of objects
// (pixel is removed, if there is white pixel on the left and
// black pixel on the right)
short[,] se = new short[,] {
{ -1, -1, -1 },
{ 1, 1, 0 },
{ -1, -1, -1 }
};
// create filter
HitAndMiss filter = new HitAndMiss( se, HitAndMiss.Modes.Thinning );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Hit and Miss modes.
Below is a list of the modes' meanings depending on a pixel's correspondence
to the specified structuring element:
- - on match pixel is set to white, otherwise to black;
- - on match pixel is set to black, otherwise not changed.
- - on match pixel is set to white, otherwise not changed.
Hit and miss mode.
Thinning mode.
Thickening mode.
Format translations dictionary.
Operation mode.
Mode to use for the filter. See enumeration
for the list of available modes and their documentation.
Default mode is set to .
Initializes a new instance of the class.
Structuring element.
Structuring element for the hit-and-miss morphological operator
must be a square matrix with odd size in the range of [3, 99].
Invalid size of structuring element.
Initializes a new instance of the class.
Structuring element.
Operation mode.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Opening operator from Mathematical Morphology.
Opening morphology operator equals to erosion followed
by dilation.
Applied to a binary image, the filter may be used for removing small objects while keeping big objects
unchanged. Since erosion is used first, it removes all small objects. Then dilation restores big
objects which were not removed by erosion.
See documentation to and classes for more
information and list of supported pixel formats.
Sample usage:
// create filter
Opening filter = new Opening( );
// apply the filter
filter.Apply( image );
Initial image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Initializes new instance of the class using
default structuring element for both and
classes - 3x3 structuring element with all elements equal to 1.
Initializes a new instance of the class.
Structuring element.
See documentation to and
classes for information about structuring element constraints.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
Unsupported pixel format of the source image.
Apply filter to an image.
Source image to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The filter accepts bitmap data as input and returns the result
of image processing filter as new image. The source image data are kept
unchanged.
Apply filter to an image in unmanaged memory.
Source image in unmanaged memory to apply filter to.
Returns filter's result obtained by applying the filter to
the source image.
The method keeps the source image unchanged and returns
the result of image processing filter as new image.
Unsupported pixel format of the source image.
Apply filter to an image in unmanaged memory.
Source image in unmanaged memory to apply filter to.
Destination image in unmanaged memory to put result into.
The method keeps the source image unchanged and puts result of image processing
into destination image.
The destination image must have the same width and height as source image. Also
destination image must have pixel format, which is expected by particular filter (see
property for information about pixel format conversions).
Unsupported pixel format of the source image.
Incorrect destination pixel format.
Destination image has wrong width and/or height.
Apply filter to an image.
Image to apply filter to.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Apply filter to an image.
Image data to apply filter to.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Apply filter to an unmanaged image.
Unmanaged image to apply filter to.
The method applies the filter directly to the provided source unmanaged image.
Unsupported pixel format of the source image.
Apply filter to an image or its part.
Image to apply filter to.
Image rectangle for processing by the filter.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Apply filter to an image or its part.
Image data to apply filter to.
Image rectangle for processing by the filter.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Apply filter to an unmanaged image or its part.
Unmanaged image to apply filter to.
Image rectangle for processing by the filter.
The method applies the filter directly to the provided source image.
Unsupported pixel format of the source image.
Binary dilation operator from Mathematical Morphology with 3x3 structuring element.
The filter represents an optimized version of the
filter, which is aimed at binary images (containing black and white pixels) processed
with a 3x3 structuring element. This makes the filter ideal for growing objects in binary
images: it puts a white pixel into the destination image if there is at least
one white neighbouring pixel in the source image.
See filter, which represents generic version of
dilation filter supporting custom structuring elements and wider range of image formats.
The filter accepts 8 bpp grayscale (binary) images for processing.
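Sample usage (a minimal sketch; the class name BinaryDilation3x3 is assumed here to
match the naming of the generic dilation filter above):
// create filter
BinaryDilation3x3 filter = new BinaryDilation3x3( );
// apply the filter
filter.ApplyInPlace( image );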
Binarized image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Processing rectangle must be at least 3x3 in size.
Binary erosion operator from Mathematical Morphology with 3x3 structuring element.
The filter represents an optimized version of the
filter, which is aimed at binary images (containing black and white pixels) processed
with a 3x3 structuring element. This makes the filter ideal for removing noise in binary
images: it removes all white pixels which neighbour at least one black pixel.
See filter, which represents generic version of
erosion filter supporting custom structuring elements and wider range of image formats.
The filter accepts 8 bpp grayscale (binary) images for processing.
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Processing rectangle must be at least 3x3 in size.
Dilation operator from Mathematical Morphology with 3x3 structuring element.
The filter represents an optimized version of the
filter, which is aimed at grayscale image processing with a 3x3 structuring element.
See filter, which represents generic version of
dilation filter supporting custom structuring elements and wider range of image formats.
The filter accepts 8 bpp grayscale images for processing.
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Processing rectangle must be at least 3x3 in size.
Erosion operator from Mathematical Morphology with 3x3 structuring element.
The filter represents an optimized version of the
filter, which is aimed at grayscale image processing with a 3x3 structuring element.
See filter, which represents generic version of
erosion filter supporting custom structuring elements and wider range of image formats.
The filter accepts 8 bpp grayscale images for processing.
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Processing rectangle must be at least 3x3 in size.
Top-hat operator from Mathematical Morphology.
Top-hat morphological operator subtracts the
result of morphological opening on the input image
from the input image itself.
Applied to a binary image, the filter allows getting all those objects (or their parts)
which were removed by the opening filter but never restored.
The filter accepts 8 and 16 bpp grayscale images and 24 and 48 bpp
color images for processing.
Sample usage:
// create filter
TopHat filter = new TopHat( );
// apply the filter
filter.Apply( image );
Initial image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Initializes a new instance of the class.
Structuring element to pass to operator.
Process the filter on the specified image.
Source image data.
Additive noise filter.
The filter adds random value to each pixel of the source image.
The distribution of random values can be specified by random generator.
The filter accepts 8 bpp grayscale images and 24 bpp
color images for processing.
Sample usage:
// create random generator
IRandomNumberGenerator generator = new UniformGenerator( new Range( -50, 50 ) );
// create filter
AdditiveNoise filter = new AdditiveNoise( generator );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Random number generator used to add noise.
Default generator is uniform generator in the range of (-10, 10).
Initializes a new instance of the class.
Initializes a new instance of the class.
Random number generator used to add noise.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Salt and pepper noise.
The filter adds random salt and pepper noise - sets
maximum or minimum values to randomly selected pixels.
The filter accepts 8 bpp grayscale images and 24/32 bpp
color images for processing.
Sample usage:
// create filter
SaltAndPepperNoise filter = new SaltAndPepperNoise( 10 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Amount of noise to generate in percents, [0, 100].
Initializes a new instance of the class.
Initializes a new instance of the class.
Amount of noise to generate in percents, [0, 100].
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Extract normalized RGB channel from color image.
Extracts specified normalized RGB channel of color image and returns
it as grayscale image.
Normalized RGB color space is defined as:
r = R / ( R + G + B ),
g = G / ( R + G + B ),
b = B / ( R + G + B ),
where R, G and B are components of RGB color space and
r, g and b are components of normalized RGB color space.
The filter accepts 24, 32, 48 and 64 bpp color images and produces
8 (if source is 24 or 32 bpp image) or 16 (if source is 48 or 64 bpp image)
bpp grayscale image.
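For example, a pixel with (R, G, B) = (100, 50, 50) gives g = 50 / 200 = 0.25 for the
normalized green channel; assuming the normalized value is scaled to the full range of the
destination format, the produced 8 bpp grayscale pixel is about 0.25 * 255 = 64.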
Sample usage:
// create filter
ExtractNormalizedRGBChannel filter = new ExtractNormalizedRGBChannel( RGB.G );
// apply the filter
Bitmap channelImage = filter.Apply( image );
Format translations dictionary.
Normalized RGB channel to extract.
Default value is set to .
Invalid channel is specified.
Initializes a new instance of the class.
Initializes a new instance of the class.
Normalized RGB channel to extract.
Process the filter on the specified image.
Source image data.
Destination image data.
Apply mask to the specified image.
The filter applies mask to the specified image - keeps all pixels
in the image if corresponding pixels/values of the mask are not equal to 0. For all
0 pixels/values in mask, corresponding pixels in the source image are set to 0.
Mask can be specified as .NET's managed Bitmap, as
UnmanagedImage or as byte array.
In the case if the mask is specified as an image, it must be an 8 bpp grayscale image. In all cases the
mask size must be the same as the size of the image to process.
The filter accepts 8/16 bpp grayscale and 24/32/48/64 bpp color images for processing.
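Sample usage (a minimal sketch; maskImage is assumed to be an 8 bpp grayscale image of
the same size as the processed image):
// create filter with the mask image
ApplyMask filter = new ApplyMask( maskImage );
// apply the filter - pixels under zero mask values are set to 0
filter.ApplyInPlace( image );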
Mask image to apply.
The property specifies mask image to use. The image must be grayscale
(8 bpp format) and have the same size as the source image to process.
When the property is set, both and
properties are set to .
The mask image must be 8 bpp grayscale image.
Unmanaged mask image to apply.
The property specifies unmanaged mask image to use. The image must be grayscale
(8 bpp format) and have the same size as the source image to process.
When the property is set, both and
properties are set to .
The mask image must be 8 bpp grayscale image.
Mask to apply.
The property specifies the mask array to use. Size of the array must
be the same as the size of the source image to process - its 0th dimension
must be equal to the image's height and its 1st dimension must be equal to its width. For
example, for 640x480 image, the mask array must be defined as:
byte[,] mask = new byte[480, 640];
Format translations dictionary.
See
documentation for additional information.
Initializes a new instance of the class.
Mask image to use.
Initializes a new instance of the class.
Unmanaged mask image to use.
Initializes a new instance of the class.
to use.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
None of the possible mask properties were set. Need to provide mask before applying the filter.
Invalid size of provided mask. Its size must be the same as the size of the image to mask.
Watershed filter.
In the study of image processing, a watershed is a transformation defined on a grayscale image.
The name refers metaphorically to a geological watershed, or drainage divide, which separates
adjacent drainage basins. The watershed transformation treats the image it operates upon like a
topographic map, with the brightness of each point representing its height, and finds the lines
that run along the tops of ridges.
There are different technical definitions of a watershed. In graphs, watershed lines may be
defined on the nodes, on the edges, or hybrid lines on both nodes and edges. Watersheds may
also be defined in the continuous domain.[1] There are also many different algorithms to compute
watersheds. Watershed algorithm is used in image processing primarily for segmentation purposes.
References:
-
Wikipedia contributors. "Watershed (image processing)." Wikipedia, The Free Encyclopedia.
Available on: https://en.wikipedia.org/wiki/Watershed_(image_processing)
Bitmap input = ...
// Apply the transform
var dt = new BinaryWatershed();
Bitmap output = dt.Apply(input);
// Show results on screen
ImageBox.Show("input", input);
ImageBox.Show("output", output);
// Mark points using PointsMarker
var marker = new PointsMarker(Color.Red, 5)
{
Points = dt.MaxPoints
};
Bitmap marked = marker.Apply(output);
ImageBox.Show("markers", marked);
Gets the list of maximum points found in the image.
Gets or sets the tolerance. Default is 0.5f.
Gets or sets the distance method to be used in the
underlying .
Format translations dictionary.
Initializes a new instance of the class.
Initializes a new instance of the class.
The tolerance. Default is 0.5f.
Initializes a new instance of the class.
The tolerance. Default is 0.5f.
The distance method.
Initializes a new instance of the class.
The distance method.
Processes the filter.
The image.
Distance functions that can be used with .
Chessboard distance.
Euclidean distance.
Manhattan distance.
Squared Euclidean distance.
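For example, a specific metric may be passed when constructing the transform (a minimal
sketch; the enumeration name DistanceTransformMethod and its Euclidean member are
assumptions made here for illustration):
Bitmap input = ...
// create a distance transform using the Euclidean metric
DistanceTransform dt = new DistanceTransform( DistanceTransformMethod.Euclidean );
// apply the transform
Bitmap output = dt.Apply( input );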
Distance transform filter.
A distance transform, also known as distance map or distance field, is a derived
representation of a digital image. The choice of the term depends on the point of
view on the object in question: whether the initial image is transformed into another
representation, or it is simply endowed with an additional map or field.
Distance fields can also be signed, in the case where it is important to distinguish whether
the point is inside or outside of the shape. The map labels each pixel of the image with
the distance to the nearest obstacle pixel. The most common type of obstacle pixel is a boundary
pixel in a binary image. See the image for an example of a chessboard distance transform
on a binary image.
Usually the transform/map is qualified with the chosen metric. For example, one may
speak of distance transform, if the
underlying metric is Manhattan distance. Common metrics are:
the Euclidean distance; the Taxicab
geometry, also known as City block distance or Manhattan
distance; and the Chessboard distance.
References:
-
Wikipedia contributors. "Distance transform." Wikipedia, The Free Encyclopedia.
Available on: https://en.wikipedia.org/wiki/Distance_transform
Bitmap input = ...
// Apply the transform
DistanceTransform dt = new DistanceTransform();
Bitmap output = dt.Apply(input);
// Show results on screen
ImageBox.Show("input", input);
ImageBox.Show("output", output);
Format translations dictionary.
Gets the resulting pixels of the last transformed image as a float[] array.
Initializes a new instance of the class.
Initializes a new instance of the class.
Gets the maximum distance from the transform.
Gets the ultimate eroded point.
Process the filter on the specified image.
Source image data.
Blobs filtering by size.
The filter performs filtering of blobs by their size in the specified
source image - all blobs which are smaller or bigger than the specified limits are
removed from the image.
The image processing filter treats all non-black pixels as objects'
pixels and all black pixels as background.
The filter accepts 8 bpp grayscale images and 24/32
color images for processing.
Sample usage:
// create filter
BlobsFiltering filter = new BlobsFiltering( );
// configure filter
filter.CoupledSizeFiltering = true;
filter.MinWidth = 70;
filter.MinHeight = 70;
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Specifies if size filtering should be coupled or not.
See documentation for property
of class for more information.
Minimum allowed width of blob.
Minimum allowed height of blob.
Maximum allowed width of blob.
Maximum allowed height of blob.
Custom blobs' filter to use.
See for information
about custom blobs' filtering routine.
Initializes a new instance of the class.
Initializes a new instance of the class.
Minimum allowed width of blob.
Minimum allowed height of blob.
Maximum allowed width of blob.
Maximum allowed height of blob.
This constructor creates an instance of class
with property set to false.
Initializes a new instance of the class.
Minimum allowed width of blob.
Minimum allowed height of blob.
Maximum allowed width of blob.
Maximum allowed height of blob.
Specifies if size filtering should be coupled or not.
For information about coupled filtering mode see documentation for
property of
class.
Initializes a new instance of the class.
Custom blobs' filtering routine to use
(see ).
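As a sketch of a custom filtering routine (assuming the callback interface is named
IBlobsFilter and exposes a Check( Blob ) method returning true for blobs to keep - the
exact contract should be confirmed against the interface's documentation):
// custom blobs' filter, which keeps only roughly square blobs
public class SquareBlobsFilter : IBlobsFilter
{
    public bool Check( Blob blob )
    {
        double ratio = (double) blob.Rectangle.Width / blob.Rectangle.Height;
        // keep the blob only if its bounding box is close to a square
        return ( ratio >= 0.8 ) && ( ratio <= 1.25 );
    }
}
// create filter with the custom filtering routine
BlobsFiltering filter = new BlobsFiltering( new SquareBlobsFilter( ) );
// apply the filter
filter.ApplyInPlace( image );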
Process the filter on the specified image.
Source image data.
Fill areas outside of the specified region.
The filter fills areas outside of specified region using the specified color.
The filter accepts 8bpp grayscale and 24/32 bpp color images for processing.
Sample usage:
// create filter
CanvasCrop filter = new CanvasCrop( new Rectangle(
5, 5, image.Width - 10, image.Height - 10 ), Color.Red );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
See
documentation for additional information.
RGB fill color.
The color is used to fill areas out of specified region in color images.
Default value is set to white - RGB(255, 255, 255).
Gray fill color.
The color is used to fill areas out of specified region in grayscale images.
Default value is set to white - 255.
Region to keep.
Pixels inside of the specified region will keep their values, but
pixels outside of the region will be filled with specified color.
Initializes a new instance of the class.
Region to keep.
Initializes a new instance of the class.
Region to keep.
RGB color to use for filling areas outside of specified region in color images.
Initializes a new instance of the class.
Region to keep.
Gray color to use for filling areas outside of specified region in grayscale images.
Initializes a new instance of the class.
Region to keep.
RGB color to use for filling areas outside of specified region in color images.
Gray color to use for filling areas outside of specified region in grayscale images.
Process the filter on the specified image.
Source image data.
Fill areas inside of the specified region.
The filter fills areas inside of specified region using the specified color.
The filter accepts 8bpp grayscale and 24/32 bpp color images for processing.
Sample usage:
// create filter
CanvasFill filter = new CanvasFill( new Rectangle(
5, 5, image.Width - 10, image.Height - 10 ), Color.Red );
// apply the filter
filter.ApplyInPlace( image );
Format translations dictionary.
See
documentation for additional information.
RGB fill color.
The color is used to fill areas inside of the specified region in color images.
Default value is set to white - RGB(255, 255, 255).
Gray fill color.
The color is used to fill areas inside of the specified region in grayscale images.
Default value is set to white - 255.
Region to fill.
Pixels inside of the specified region will be filled with specified color.
Initializes a new instance of the class.
Region to fill.
Initializes a new instance of the class.
Region to fill.
RGB color to use for filling areas inside of specified region in color images.
Initializes a new instance of the class.
Region to fill.
Gray color to use for filling areas inside of specified region in grayscale images.
Initializes a new instance of the class.
Region to fill.
RGB color to use for filling areas inside of specified region in color images.
Gray color to use for filling areas inside of specified region in grayscale images.
Process the filter on the specified image.
Source image data.
Move canvas to the specified point.
The filter moves canvas to the specified area filling unused empty areas with specified color.
The filter accepts 8/16 bpp grayscale images and 24/32/48/64 bpp color image
for processing.
Sample usage:
// create filter
CanvasMove filter = new CanvasMove( new IntPoint( -50, -50 ), Color.Green );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
See
documentation for additional information.
RGB fill color.
The color is used to fill empty areas in color images.
Default value is set to white - ARGB(255, 255, 255, 255).
Gray fill color.
The color is used to fill empty areas in grayscale images.
Default value is set to white - 255.
Point to move the canvas to.
Initializes a new instance of the class.
Point to move the canvas to.
Initializes a new instance of the class.
Point to move the canvas.
RGB color to use for filling empty areas in color images.
Initializes a new instance of the class.
Point to move the canvas.
Gray color to use for filling empty areas in grayscale images.
Initializes a new instance of the class.
Point to move the canvas.
RGB color to use for filling empty areas in color images.
Gray color to use for filling empty areas in grayscale images.
Process the filter on the specified image.
Source image data.
Connected components labeling.
The filter performs labeling of objects in the source image. It colors
each separate object using a different color. The image processing filter treats all
non-black pixels as objects' pixels and all black pixels as background.
The filter accepts 8 bpp grayscale images and 24/32 bpp color images and produces
24 bpp RGB image.
Sample usage:
// create filter
var filter = new ConnectedComponentsLabeling();
// apply the filter
Bitmap newImage = filter.Apply(image);
// check objects count
int objectCount = filter.ObjectCount;
Initial image:
Result image:
Format translations dictionary.
Blob counter used to locate separate blobs.
The property allows setting the blob counter to use for blobs' localization.
Default value is set to .
Colors used to color the binary image.
Specifies if blobs should be filtered.
See documentation for property
of class for more information.
Specifies if size filtering should be coupled or not.
See documentation for property
of class for more information.
Minimum allowed width of blob.
Minimum allowed height of blob.
Maximum allowed width of blob.
Maximum allowed height of blob.
Objects count.
The amount of objects found in the last processed image.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Destination image data.
Filter to mark (highlight) corners of objects.
The filter highlights corners of objects on the image using provided corners
detection algorithm.
The filter accepts 8 bpp grayscale and 24/32 color images for processing.
Sample usage:
// create corner detector's instance
SusanCornersDetector scd = new SusanCornersDetector( );
// create corner maker filter
CornersMarker filter = new CornersMarker( scd, Color.Red );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Color used to mark corners.
Interface of corners' detection algorithm used to detect corners.
Initializes a new instance of the class.
Interface of corners' detection algorithm.
Initializes a new instance of the class.
Interface of corners' detection algorithm.
Marker's color used to mark corner.
Process the filter on the specified image.
Source image data.
Fill holes in objects in binary image.
The filter allows filling black holes in white objects in a binary image.
It is possible to specify the maximum holes' size to fill using
and properties.
The filter accepts binary images only, which are represented as 8 bpp images.
Sample usage:
// create and configure the filter
FillHoles filter = new FillHoles( );
filter.MaxHoleHeight = 20;
filter.MaxHoleWidth = 20;
filter.CoupledSizeFiltering = false;
// apply the filter
Bitmap result = filter.Apply( image );
Initial image:
Result image:
Specifies if size filtering should be coupled or not.
In uncoupled filtering mode, holes are filled if
their width is smaller than or equal to or their height is smaller than
or equal to . But in coupled filtering mode, holes are filled only
if both width and height are smaller than or equal to the corresponding values.
Default value is set to , which means coupled filtering by size.
Maximum width of a hole to fill.
All holes, which have width greater than this value, are kept unfilled.
See for additional information.
Default value is set to .
Maximum height of a hole to fill.
All holes, which have height greater than this value, are kept unfilled.
See for additional information.
Default value is set to .
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Horizontal run length smoothing algorithm.
The class implements horizontal run length smoothing algorithm, which
is described in: K.Y. Wong, R.G. Casey and F.M. Wahl, "Document analysis system,"
IBM J. Res. Devel., Vol. 26, No. 6, pp. 647-656, 1982.
Unlike the original description of this algorithm, this implementation must be applied
to inverted binary images containing document, i.e. white text on black background. So this
implementation fills horizontal black gaps between white pixels.
This algorithm is usually used together with ,
and then further analysis of white blobs.
The filter accepts 8 bpp grayscale images, which are supposed to be binary inverted documents.
Sample usage:
// create filter
HorizontalRunLengthSmoothing hrls = new HorizontalRunLengthSmoothing( 32 );
// apply the filter
hrls.ApplyInPlace( image );
Source image:
Result image:
Maximum gap size to fill (in pixels).
The property specifies the maximum horizontal gap between white pixels to fill.
If the number of black pixels between some white pixels is bigger than this value, then those
black pixels are left as is; otherwise the gap is filled with white pixels.
Default value is set to 10. Minimum value is 1. Maximum value is 1000.
Process gaps between objects and image borders or not.
The property sets if gaps between image borders and objects must be treated as
gaps between objects and also filled.
Default value is set to .
Format translations dictionary.
See
documentation for additional information.
Initializes a new instance of the class.
Initializes a new instance of the class.
Maximum gap size to fill (see ).
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Image warp effect filter.
The image processing filter implements a warping filter, which
sets pixels in destination image to values from source image taken with specified offset
(see ).
The filter accepts 8 bpp grayscale images and 24/32
color images for processing.
Sample usage:
// build warp map
int width = image.Width;
int height = image.Height;
IntPoint[,] warpMap = new IntPoint[height, width];
int size = 8;
int maxOffset = -size + 1;
for ( int y = 0; y < height; y++ )
{
for ( int x = 0; x < width; x++ )
{
int dx = ( x / size ) * size - x;
int dy = ( y / size ) * size - y;
if ( dx + dy <= maxOffset )
{
dx = ( x / size + 1 ) * size - 1 - x;
}
warpMap[y, x] = new IntPoint( dx, dy );
}
}
// create filter
ImageWarp filter = new ImageWarp( warpMap );
// apply the filter
Bitmap newImage = filter.Apply( image );
Initial image:
Result image:
Map used for warping images.
The property sets displacement map used for warping images.
The map sets offsets of pixels in source image, which are used to set values in destination
image. In other words, each pixel in destination image is set to the same value
as pixel in source image with corresponding offset (coordinates of pixel in source image
are calculated as sum of destination coordinate and corresponding value from warp map).
The map array is accessed using [y, x] indexing, i.e.
first dimension in the map array corresponds to Y axis of image.
If the map is smaller or bigger than the image to process, then only the minimum
overlapping area of the image is processed. This allows preparing a single big map and reusing
it for a set of images to create similar effects.
Format translations dictionary.
See
documentation for additional information.
Initializes a new instance of the class.
Map used for warping images (see ).
Process the filter on the specified image.
Source image data.
Destination image data.
Jitter filter.
The filter moves each pixel of a source image in
random direction within a window of specified radius.
The filter accepts 8 bpp grayscale images and 24/32
color images for processing.
Sample usage:
// create filter
Jitter filter = new Jitter( 4 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
See
documentation for additional information.
Jittering radius, [1, 10]
Determines radius in which pixels can move.
Default value is set to 2.
Initializes a new instance of the class.
Initializes a new instance of the class.
Jittering radius.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Apply filter according to the specified mask.
The image processing routine applies the specified to
a source image according to the specified mask - if a pixel/value in the specified mask image/array
is set to 0, then the original pixel's value is kept; otherwise the pixel is filtered using the
specified base filter.
Mask can be specified as .NET's managed Bitmap, as
UnmanagedImage or as byte array.
In the case if the mask is specified as an image, it must be an 8 bpp grayscale image. In all cases the
mask size must be the same as the size of the image to process.
Pixel formats accepted by this filter are specified by the .
Sample usage:
// create the filter
MaskedFilter maskedFilter = new MaskedFilter( new Sepia( ), maskImage );
// apply the filter
maskedFilter.ApplyInPlace( image );
Initial image:
Mask image:
Result image:
Base filter to apply to the source image.
The property specifies the base filter which is applied to the specified source
image (to all pixels which have a corresponding non-zero value in the mask image/array).
The base filter must implement the interface.
The base filter must never change image's pixel format. For example, if source
image's pixel format is 24 bpp color image, then it must stay the same after the base
filter is applied.
The base filter must never change size of the source image.
Base filter can not be set to null.
The specified base filter must implement IFilterInformation interface.
The specified filter must never change pixel format.
Mask image to apply.
The property specifies mask image to use. The image must be grayscale
(8 bpp format) and have the same size as the source image to process.
When the property is set, both and
properties are set to .
The mask image must be 8 bpp grayscale image.
Unmanaged mask image to apply.
The property specifies unmanaged mask image to use. The image must be grayscale
(8 bpp format) and have the same size as the source image to process.
When the property is set, both and
properties are set to .
The mask image must be 8 bpp grayscale image.
Mask to apply.
The property specifies the mask array to use. Size of the array must
be the same as the size of the source image to process - its 0th dimension
must be equal to the image's height and its 1st dimension must be equal to its width. For
example, for 640x480 image, the mask array must be defined as:
byte[,] mask = new byte[480, 640];
Format translations dictionary.
See
documentation for additional information.
The property returns format translation table from the
.
Initializes a new instance of the class.
Base filter to apply to the specified source image.
Mask image to use.
Initializes a new instance of the class.
Base filter to apply to the specified source image.
Unmanaged mask image to use.
Initializes a new instance of the class.
Base filter to apply to the specified source image.
to use.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
None of the possible mask properties were set. Need to provide mask before applying the filter.
Invalid size of provided mask. Its size must be the same as the size of the image to mask.
Mirroring filter.
The filter mirrors image around X and/or Y axis (horizontal and vertical
mirroring).
The filter accepts 8 bpp grayscale images and 24 bpp
color images for processing.
Sample usage:
// create filter
Mirror filter = new Mirror( false, true );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Specifies if mirroring should be done for X axis (horizontal mirroring).
Specifies if mirroring should be done for Y axis (vertical mirroring).
Initializes a new instance of the class.
Specifies if mirroring should be done for X axis.
Specifies if mirroring should be done for Y axis.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Oil painting filter.
Processing the source image, the filter changes each pixel's value
to the value of the pixel with the most frequent intensity within a window of the
specified size. Going through the window, the filter
finds which intensity of pixels is the most frequent. Then it updates the value
of the pixel in the center of the window to the value with the most frequent
intensity. The update procedure creates the effect of oil painting.
The filter accepts 8 bpp grayscale images and 24/32
color images for processing.
Sample usage:
// create filter
OilPainting filter = new OilPainting( 15 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
See
documentation for additional information.
Brush size, [3, 21].
Window size to search for most frequent pixels' intensity.
Default value is set to 5.
Initializes a new instance of the class.
Initializes a new instance of the class.
Brush size.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Pixellate filter.
The filter processes an image creating the effect of an image with larger
pixels - a pixellated image. The effect is achieved by filling the image's rectangles of the
specified size with the color which is the mean color value for the corresponding rectangle.
The size of rectangles to process is set by and
properties.
The filter accepts 8 bpp grayscale images and 24 bpp
color images for processing.
Sample usage:
// create filter
Pixellate filter = new Pixellate( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Pixel width, [2, 32].
Default value is set to 8.
Pixel height, [2, 32].
Default value is set to 8.
Pixel size, [2, 32].
The property is used to set both and
simultaneously.
Initializes a new instance of the class.
Initializes a new instance of the class.
Pixel size.
Initializes a new instance of the class.
Pixel width.
Pixel height.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Simple skeletonization filter.
The filter builds simple objects' skeletons by thinning them until
they have one pixel wide "bones" horizontally and vertically. The filter uses
and colors to distinguish
between objects and background.
The filter accepts 8 bpp grayscale images for processing.
Sample usage:
// create filter
SimpleSkeletonization filter = new SimpleSkeletonization( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
See
documentation for additional information.
Background pixel color.
The property sets the background (non-object) color to look for.
Default value is set to 0 - black.
Foreground pixel color.
The property sets the objects' (non-background) color to look for.
Default value is set to 255 - white.
Initializes a new instance of the class.
Initializes a new instance of the class.
Background pixel color.
Foreground pixel color.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Textured filter - filter an image using texture.
The filter is similar to the filter in its
nature, but instead of working with a source image and overlay, it uses the provided
filters to create the images to merge (see the and
properties). In addition, it uses a more complex formula for calculating the
destination pixel's value, which gives a greater amount of flexibility:
dst = filterLevel * ( src1 * textureValue + src2 * ( 1.0 - textureValue ) ) + preserveLevel * src2,
where src1 is the value of a pixel from the image produced by the first filter,
src2 is the value of a pixel from the image produced by the second filter,
dst is the value of a pixel in the destination image, textureValue is the corresponding value
from the provided texture (see or ), and filterLevel and preserveLevel are the
filter level and preserve level values described below.
It is possible to leave the second filter unset. In this case the
original source image will be used instead of the result produced by the second filter.
The filter accepts 24 bpp color images for processing.
Sample usage #1:
// create filter
TexturedFilter filter = new TexturedFilter( new CloudsTexture( ),
new HueModifier( 50 ) );
// apply the filter
Bitmap newImage = filter.Apply( image );
Sample usage #2:
// create filter
TexturedFilter filter = new TexturedFilter( new CloudsTexture( ),
new GrayscaleBT709( ), new Sepia( ) );
// apply the filter
Bitmap newImage = filter.Apply( image );
Initial image:
Result image #1:
Result image #2:
Format translations dictionary.
See for more information.
Filter level value, [0, 1].
Filtering factor determines the portion of the destination image which is formed
as a result of merging the source images using the specified texture.
Default value is set to 1.0.
See class description for more details.
Preserve level value
Preserving factor determines portion taken from the image produced
by (or from original source) without applying textured
merge to it.
Default value is set to 0.0.
See class description for more details.
Generated texture.
Two dimensional array of texture intensities.
Size of the provided texture should be the same as the size of images which will
be passed to the filter.
The property has priority over this property - if a
generator is specified, then the statically generated texture is not used.
Texture generator.
Generator used to generate texture.
The property has priority over the property.
First filter.
Filter, which is used to produce the first image for the merge. The filter
needs to implement the interface, so it is possible
to get information about the filter. The filter must be able to process color 24 bpp
images and produce color 24 bpp or grayscale 8 bpp images as a result.
The specified filter does not support 24 bpp color images.
The specified filter does not produce image of supported format.
The specified filter does not implement IFilterInformation interface.
Second filter
Filter, which is used to produce the second image for the merge. The filter
needs to implement the interface, so it is possible
to get information about the filter. The filter must be able to process color 24 bpp
images and produce color 24 bpp or grayscale 8 bpp images as a result.
The filter may be set to . In this case the original source image
is used as the second image for the merge.
The specified filter does not support 24 bpp color images.
The specified filter does not produce image of supported format.
The specified filter does not implement IFilterInformation interface.
Initializes a new instance of the class.
Generated texture.
First filter.
Initializes a new instance of the class.
Generated texture.
First filter.
Second filter.
Initializes a new instance of the class.
Texture generator.
First filter.
Initializes a new instance of the class.
Texture generator.
First filter.
Second filter.
Process the filter on the specified image.
Source image data.
Destination image data.
Texture size does not match image size.
Filters should not change image dimension.
Merge two images using factors from texture.
The filter is similar to the filter in its idea, but
instead of using a single value for balancing the amount of the source's and overlay's image
values (see ), the filter uses a texture, which determines
the amount to take from the source image and the overlay image.
The filter uses the specified texture to adjust values using the following formula:
dst = src * textureValue + ovr * ( 1.0 - textureValue ),
where src is the value of a pixel in the source image, ovr is the value of a pixel in the
overlay image, dst is the value of a pixel in the destination image and
textureValue is the corresponding value from the provided texture (see or
).
The filter accepts 8 bpp grayscale and 24 bpp color images for processing.
Sample usage #1:
// create filter
TexturedMerge filter = new TexturedMerge( new TextileTexture( ) );
// create an overlay image to merge with
filter.OverlayImage = new Bitmap( image.Width, image.Height,
PixelFormat.Format24bppRgb );
// fill the overlay image with solid color
PointedColorFloodFill fillFilter = new PointedColorFloodFill( Color.DarkKhaki );
fillFilter.ApplyInPlace( filter.OverlayImage );
// apply the merge filter
filter.ApplyInPlace( image );
Sample usage #2:
// create filter
TexturedMerge filter = new TexturedMerge( new CloudsTexture( ) );
// create 2 images with modified Hue
HueModifier hm1 = new HueModifier( 50 );
HueModifier hm2 = new HueModifier( 200 );
filter.OverlayImage = hm2.Apply( image );
hm1.ApplyInPlace( image );
// apply the merge filter
filter.ApplyInPlace( image );
Initial image:
Result image #1:
Result image #2:
Format translations dictionary.
See the base class documentation for more information.
Generated texture.
Two dimensional array of texture intensities.
If the image passed to the filter is smaller or
larger than the specified texture, then only the image region equal to the
minimum overlapping area is processed.
The texture generator property has priority over this property - if
a generator is specified, then the statically generated texture is not used.
Texture generator.
Generator used to generate texture.
This property has priority over the static texture property.
Initializes a new instance of the class.
Generated texture.
Initializes a new instance of the class.
Texture generator.
Process the filter on the specified image.
Source image data.
Overlay image data.
Texturer filter.
Adjusts pixels' color values using factors from the given texture. In conjunction with different types
of texture generators, the filter may produce a variety of interesting effects.
The filter uses the specified texture to adjust values using the following formula:
dst = src * preserveLevel + src * filterLevel * textureValue,
where src is the value of a pixel in the source image, dst is the value of the pixel in the destination image and
textureValue is the corresponding value from the provided texture (see the static texture or
texture generator properties). Using the filter level and preserve level values it is possible
to control the portion of the source data affected by the texture.
In most cases the filter level and preserve level properties are set in such a
way that filterLevel + preserveLevel = 1. But there is actually no limitation
on those values, so their sum may be greater or less than 1 in order to create different types of
effects.
The filter accepts 8 bpp grayscale and 24 bpp color images for processing.
Sample usage:
// create filter
Texturer filter = new Texturer( new TextileTexture( ), 0.3, 0.7 );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
See the base class documentation for more information.
Filter level value.
Filtering factor determines the fraction of the image to filter - i.e. to multiply
by values from the provided texture.
Default value is set to 0.5.
See class description for more details.
Preserve level value.
Preserving factor determines the fraction of the image to keep from filtering.
Default value is set to 0.5.
See class description for more details.
Generated texture.
Two dimensional array of texture intensities.
If the image passed to the filter is smaller or
larger than the specified texture, then only the image region equal to the
minimum overlapping area is processed.
The texture generator property has priority over this property - if
a generator is specified, then the statically generated texture is not used.
Texture generator.
Generator used to generate texture.
This property has priority over the static texture property.
Initializes a new instance of the class.
Generated texture.
Initializes a new instance of the class.
Generated texture.
Filter level value (see property).
Preserve level value (see property).
Initializes a new instance of the class.
Texture generator.
Initializes a new instance of the class.
Texture generator.
Filter level value (see property).
Preserve level value (see property).
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Vertical run length smoothing algorithm.
The class implements the vertical run length smoothing algorithm, which
is described in: K.Y. Wong, R.G. Casey and F.M. Wahl, "Document analysis system,"
IBM J. Res. Devel., Vol. 26, No. 6, pp. 647-656, 1982.
Unlike the original description of this algorithm, this implementation must be applied
to inverted binary images containing a document, i.e. white text on a black background. So this
implementation fills vertical black gaps between white pixels.
This algorithm is usually used together with horizontal run length smoothing,
followed by further analysis of white blobs.
The filter accepts 8 bpp grayscale images, which are supposed to be binary inverted documents.
Sample usage:
// create filter
VerticalRunLengthSmoothing vrls = new VerticalRunLengthSmoothing( 32 );
// apply the filter
vrls.ApplyInPlace( image );
Source image:
Result image:
Maximum gap size to fill (in pixels).
The property specifies the maximum vertical gap between white pixels to fill.
If the number of black pixels between some white pixels is greater than this value, those
black pixels are left as is; otherwise the gap is filled with white pixels.
Default value is set to 10. Minimum value is 1. Maximum value is 1000.
Process gaps between objects and image borders or not.
The property specifies whether gaps between image borders and objects must be treated as
gaps between objects and filled as well.
Default value is set to .
Format translations dictionary.
See the base class documentation for additional information.
Initializes a new instance of the class.
Initializes a new instance of the class.
Maximum gap size to fill (see ).
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Simple water wave effect filter.
The image processing filter implements simple water wave effect. Using
properties of the class, it is possible to set number of vertical/horizontal waves,
as well as their amplitude.
Bilinear interpolation is used to create smooth effect.
The filter accepts 8 bpp grayscale images and 24/32 bpp
color images for processing.
Sample usage:
// create filter
WaterWave filter = new WaterWave( );
filter.HorizontalWavesCount = 10;
filter.HorizontalWavesAmplitude = 5;
filter.VerticalWavesCount = 3;
filter.VerticalWavesAmplitude = 15;
// apply the filter
Bitmap newImage = filter.Apply( image );
Initial image:
Result image:
Number of horizontal waves, [1, 10000].
Default value is set to 5.
Number of vertical waves, [1, 10000].
Default value is set to 5.
Amplitude of horizontal waves measured in pixels, [0, 10000].
Default value is set to 10.
Amplitude of vertical waves measured in pixels, [0, 10000].
Default value is set to 10.
Format translations dictionary.
See the base class documentation for additional information.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Destination image data.
Adaptive Smoothing - noise removal with edges preserving.
The filter is aimed at performing image smoothing while keeping sharp edges,
which makes it applicable to additive noise removal and smoothing objects' interiors, but
not applicable to removal of spikes (salt and pepper noise).
The following calculations are done for each pixel:
- weights are calculated for 9 pixels - the pixel itself and its 8 neighbors:
w(x, y) = exp( -1 * (Gx^2 + Gy^2) / (2 * factor^2) ),
Gx(x, y) = (I(x + 1, y) - I(x - 1, y)) / 2,
Gy(x, y) = (I(x, y + 1) - I(x, y - 1)) / 2,
where factor is a configurable value determining the smoothing quality;
- the sum of the 9 weights is calculated (weightTotal);
- the sum of the 9 weighted pixel values is calculated (total);
- the destination pixel is calculated as total / weightTotal.
Description of the filter was found in "An Edge Detection Technique Using
the Facet Model and Parameterized Relaxation Labeling" by Ioannis Matalas, Student Member,
IEEE, Ralph Benjamin, and Richard Kitney.
The filter accepts 8 bpp grayscale images and 24 bpp
color images for processing.
Sample usage:
// create filter
AdaptiveSmoothing filter = new AdaptiveSmoothing( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Factor value.
Factor determining smoothing quality (see
documentation).
Default value is set to 3.
Initializes a new instance of the class.
Initializes a new instance of the class.
Factor value.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Bilateral filter implementation - edge preserving smoothing and noise reduction that uses chromatic and spatial factors.
Bilateral filter conducts "selective" Gaussian smoothing of areas of same color (domains) which removes noise and contrast artifacts
while preserving sharp edges.
Two major parameters - the spatial factor and the color factor - define the result of the filter.
By changing these parameters you may achieve either only noise reduction with little change to the
image, or a nice looking effect applied to the entire image.
Although the filter can use parallel processing, large kernel size values
(greater than 25) on high resolution images may decrease processing speed. Also, on high
resolution images, small kernel size values (less than 9) may not provide noticeable
results.
More details on the algorithm can be found by following this
link.
The filter accepts 8 bpp grayscale images and 24/32 bpp color images for processing.
Sample usage:
// create filter
BilateralSmoothing filter = new BilateralSmoothing( );
filter.KernelSize = 7;
filter.SpatialFactor = 10;
filter.ColorFactor = 60;
filter.ColorPower = 0.5;
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Specifies whether an exception must be thrown in the case a large
kernel size is used, which may lead
to significant performance issues.
Default value is set to .
Enable or disable parallel processing on multi-core CPUs.
If the property is set to true, then this image processing
routine will run in parallel on systems with multiple cores/CPUs.
Default value is set to .
Size of a square for limiting surrounding pixels that take part in calculations, [3, 255].
The greater the value, the stronger the general effect of the filter. Small values
(less than 9) on high resolution images (3000 pixels wide) do not give significant results.
Large values increase the number of calculations and degrade performance.
The value of this property must be an odd integer in the [3, 255] range if
the large-kernel exception described above is disabled, or in the [3, 25] range
otherwise.
Default value is set to 9.
The specified value is out of range (see the
exception message for details).
The value of this property must be an odd integer.
Determines smoothing power within a color domain (neighbor pixels of similar color), >= 1.
Default value is set to 10.
Exponent power, used in Spatial function calculation, >= 1.
Default value is set to 2.
Determines the variance of color for a color domain, >= 1.
Default value is set to 50.
Exponent power, used in Color function calculation, >= 1.
Default value is set to 2.
Format translations dictionary.
See the base class documentation for additional information.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Conservative smoothing.
The filter implements conservative smoothing, which is a noise reduction
technique that derives its name from the fact that it employs a simple, fast filtering
algorithm that sacrifices noise suppression power in order to preserve the high spatial
frequency detail (e.g. sharp edges) in an image. It is explicitly designed to remove noise
spikes - isolated pixels of exceptionally low or high pixel intensity
(salt and pepper noise).
If the filter finds a pixel which has the minimum/maximum value compared to its surrounding
pixels, then its value is replaced by the minimum/maximum value of those surrounding pixels.
For example, suppose the filter uses a kernel size of 3x3,
which means each pixel has 8 surrounding pixels. If the pixel's value is smaller than every value
of the surrounding pixels, then the value of the pixel is replaced by the minimum value of those surrounding
pixels.
The filter accepts 8 bpp grayscale images and 24/32 bpp
color images for processing.
Sample usage:
// create filter
ConservativeSmoothing filter = new ConservativeSmoothing( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Kernel size, [3, 25].
Determines the size of the pixel square used for smoothing.
Default value is set to 3.
The value should be odd.
Initializes a new instance of the class.
Initializes a new instance of the class.
Kernel size.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Median filter.
The median filter is normally used to reduce noise in an image, somewhat like
the mean filter. However, it often does a better job than the mean
filter of preserving useful detail in the image.
Each pixel of the original source image is replaced with the median of neighboring pixel
values. The median is calculated by first sorting all the pixel values from the surrounding
neighborhood into numerical order and then replacing the pixel being considered with the
middle pixel value.
The filter accepts 8 bpp grayscale images and 24/32 bpp
color images for processing.
Sample usage:
// create filter
Median filter = new Median( );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Processing square size for the median filter, [3, 25].
Default value is set to 3.
The value should be odd.
Initializes a new instance of the class.
Initializes a new instance of the class.
Processing square size.
Process the filter on the specified image.
Source image data.
Destination image data.
Image rectangle for processing by the filter.
Performs backward quadrilateral transformation into an area in destination image.
The class implements the backward quadrilateral transformation algorithm,
which allows transforming any rectangular image into any quadrilateral area
in a given destination image. The idea of the algorithm is based on homogeneous
transformation and its math is described by Paul Heckbert in his
"Projective Mappings for Image Warping" paper.
The image processing routine implements math similar to QuadrilateralTransformation,
but performs it in the backward direction.
The image processing filter accepts 8 bpp grayscale images and 24/32 bpp
color images for processing.
Sample usage:
// define quadrilateral's corners
List<IntPoint> corners = new List<IntPoint>( );
corners.Add( new IntPoint( 99, 99 ) );
corners.Add( new IntPoint( 156, 79 ) );
corners.Add( new IntPoint( 184, 126 ) );
corners.Add( new IntPoint( 122, 150 ) );
// create filter
BackwardQuadrilateralTransformation filter =
new BackwardQuadrilateralTransformation( sourceImage, corners );
// apply the filter
Bitmap newImage = filter.Apply( image );
Source image:
Destination image:
Result image:
Format translations dictionary.
See the base class documentation for additional information.
Source image to be transformed into specified quadrilateral.
The property sets the source image, which will be transformed
to the specified quadrilateral and put into destination image the filter is applied to.
The source image must have the same pixel format as the destination image the filter
is applied to. Otherwise an exception will be generated when the filter is applied.
Setting this property will clear the unmanaged source image property -
only one source image is allowed: managed or unmanaged.
Source unmanaged image to be transformed into specified quadrilateral.
The property sets the source image, which will be transformed
to the specified quadrilateral and put into destination image the filter is applied to.
The source image must have the same pixel format as the destination image the filter
is applied to. Otherwise an exception will be generated when the filter is applied.
Setting this property will clear the managed source image property -
only one source image is allowed: managed or unmanaged.
Quadrilateral in destination image to transform into.
The property specifies 4 corners of a quadrilateral area
in destination image where the source image will be transformed into.
Specifies if bilinear interpolation should be used or not.
Default value is set to true, so interpolation is used.
Initializes a new instance of the class.
Initializes a new instance of the class.
Source image to be transformed into specified quadrilateral
(see ).
Initializes a new instance of the class.
Source unmanaged image to be transformed into specified quadrilateral
(see ).
Initializes a new instance of the class.
Source image to be transformed into specified quadrilateral
(see ).
Quadrilateral in destination image to transform into.
Initializes a new instance of the class.
Source unmanaged image to be transformed into specified quadrilateral
(see ).
Quadrilateral in destination image to transform into.
Process the filter on the specified image.
Image data to process by the filter.
Destination quadrilateral was not set.
Crop an image.
The filter crops an image providing a new image, which contains only the specified
rectangle of the original image.
The filter accepts 8 and 16 bpp grayscale images and 24, 32, 48 and 64 bpp
color images for processing.
Sample usage:
// create filter
Crop filter = new Crop( new Rectangle( 75, 75, 320, 240 ) );
// apply the filter
Bitmap newImage = filter.Apply( image );
Initial image:
Result image:
Format translations dictionary.
Rectangle to crop.
Initializes a new instance of the class.
Rectangle to crop.
Calculates new image size.
Source image data.
New image size - size of the destination image.
Process the filter on the specified image.
Source image data.
Destination image data.
Performs quadrilateral transformation of an area in a given source image.
The class implements the quadrilateral transformation algorithm,
which allows transforming any quadrilateral from a given source image
to a rectangular image. The idea of the algorithm is based on homogeneous
transformation and its math is described by Paul Heckbert in his
"Projective Mappings for Image Warping" paper.
The image processing filter accepts 8 bpp grayscale images and 24/32 bpp
color images for processing.
Sample usage:
// define quadrilateral's corners
List<IntPoint> corners = new List<IntPoint>( );
corners.Add( new IntPoint( 99, 99 ) );
corners.Add( new IntPoint( 156, 79 ) );
corners.Add( new IntPoint( 184, 126 ) );
corners.Add( new IntPoint( 122, 150 ) );
// create filter
QuadrilateralTransformation filter =
new QuadrilateralTransformation( corners, 200, 200 );
// apply the filter
Bitmap newImage = filter.Apply( image );
Initial image:
Result image:
Format translations dictionary.
See the base class documentation for additional information.
New image width.
New image height.
Automatic calculation of destination image size or not.
The property specifies how to calculate the size of the destination (transformed)
image. If the property is set to false, then the new width and
new height properties have effect and the destination image's size is
specified by the user. If the property is set to true, then setting the above
mentioned properties does not have any effect, and the destination image's size is
automatically calculated from the source quadrilateral - width and height
come from the lengths of its longest edges.
Default value is set to true.
Quadrilateral's corners in source image.
The property specifies four corners of the quadrilateral area
in the source image to be transformed.
Width of the new transformed image.
The property defines the width of the destination image, which receives the
transformed quadrilateral image.
Setting the property does not have any effect if automatic size calculation
is set to true. In that case the destination image's width
is calculated automatically from the source quadrilateral.
Height of the new transformed image.
The property defines the height of the destination image, which receives the
transformed quadrilateral image.
Setting the property does not have any effect if automatic size calculation
is set to true. In that case the destination image's height
is calculated automatically from the source quadrilateral.
Specifies if bilinear interpolation should be used or not.
Default value is set to true, so interpolation is used.
Initializes a new instance of the class.
Initializes a new instance of the class.
Corners of the source quadrilateral area.
Width of the new transformed image.
Height of the new transformed image.
This constructor sets automatic size calculation to
false, which means that the destination image will have the width and
height specified by the user.
Initializes a new instance of the class.
Corners of the source quadrilateral area.
This constructor sets automatic size calculation to
true, which means that the destination image will have its width and
height automatically calculated from the source quadrilateral.
Calculates new image size.
Source image data.
New image size - size of the destination image.
Source quadrilateral was not set.
Process the filter on the specified image.
Source image data.
Destination image data.
Performs quadrilateral transformation using bilinear algorithm for interpolation.
The class is deprecated and QuadrilateralTransformation should be used instead.
Format translations dictionary.
Automatic calculation of destination image size or not.
The property specifies how to calculate the size of the destination (transformed)
image. If the property is set to false, then the new width and
new height properties have effect and the destination image's size is
specified by the user. If the property is set to true, then setting the above
mentioned properties does not have any effect, and the destination image's size is
automatically calculated from the source quadrilateral - width and height
come from the lengths of its longest edges.
Quadrilateral's corners in source image.
The property specifies four corners of the quadrilateral area
in the source image to be transformed.
Width of the new transformed image.
The property defines the width of the destination image, which receives the
transformed quadrilateral image.
Setting the property does not have any effect if automatic size calculation
is set to true. In that case the destination image's width
is calculated automatically from the source quadrilateral.
Height of the new transformed image.
The property defines the height of the destination image, which receives the
transformed quadrilateral image.
Setting the property does not have any effect if automatic size calculation
is set to true. In that case the destination image's height
is calculated automatically from the source quadrilateral.
Initializes a new instance of the class.
Corners of the source quadrilateral area.
Width of the new transformed image.
Height of the new transformed image.
This constructor sets automatic size calculation to
false, which means that the destination image will have the width and
height specified by the user.
Initializes a new instance of the class.
Corners of the source quadrilateral area.
This constructor sets automatic size calculation to
true, which means that the destination image will have its width and
height automatically calculated from the source quadrilateral.
Process the filter on the specified image.
Source image data.
Destination image data.
Calculates new image size.
Source image data.
New image size - size of the destination image.
The specified quadrilateral's corners are outside of the given image.
Performs quadrilateral transformation using nearest neighbor algorithm for interpolation.
The class is deprecated and QuadrilateralTransformation should be used instead.
Format translations dictionary.
Automatic calculation of destination image size or not.
The property specifies how to calculate the size of the destination (transformed)
image. If the property is set to false, then the new width and
new height properties have effect and the destination image's size is
specified by the user. If the property is set to true, then setting the above
mentioned properties does not have any effect, and the destination image's size is
automatically calculated from the source quadrilateral - width and height
come from the lengths of its longest edges.
Quadrilateral's corners in source image.
The property specifies four corners of the quadrilateral area
in the source image to be transformed.
Width of the new transformed image.
The property defines the width of the destination image, which receives the
transformed quadrilateral image.
Setting the property does not have any effect if automatic size calculation
is set to true. In that case the destination image's width
is calculated automatically from the source quadrilateral.
Height of the new transformed image.
The property defines the height of the destination image, which receives the
transformed quadrilateral image.
Setting the property does not have any effect if automatic size calculation
is set to true. In that case the destination image's height
is calculated automatically from the source quadrilateral.
Initializes a new instance of the class.
Corners of the source quadrilateral area.
Width of the new transformed image.
Height of the new transformed image.
This constructor sets automatic size calculation to
false, which means that the destination image will have the width and
height specified by the user.
Initializes a new instance of the class.
Corners of the source quadrilateral area.
This constructor sets automatic size calculation to
true, which means that the destination image will have its width and
height automatically calculated from the source quadrilateral.
Process the filter on the specified image.
Source image data.
Destination image data.
Calculates new image size.
Source image data.
New image size - size of the destination image.
The specified quadrilateral's corners are outside of the given image.
Resize image using bicubic interpolation algorithm.
The class implements image resizing filter using bicubic
interpolation algorithm. It uses bicubic kernel W(x) as described on
Wikipedia
(coefficient a is set to -0.5).
The filter accepts 8 bpp grayscale images and 24 bpp
color images for processing.
Sample usage:
// create filter
ResizeBicubic filter = new ResizeBicubic( 400, 300 );
// apply the filter
Bitmap newImage = filter.Apply( image );
Initial image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Width of new image.
Height of new image.
Process the filter on the specified image.
Source image data.
Destination image data.
Resize image using bilinear interpolation algorithm.
The class implements image resizing filter using bilinear
interpolation algorithm.
The filter accepts 8 bpp grayscale images and 24/32 bpp
color images for processing.
Sample usage:
// create filter
ResizeBilinear filter = new ResizeBilinear( 400, 300 );
// apply the filter
Bitmap newImage = filter.Apply( image );
Initial image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Width of the new image.
Height of the new image.
Process the filter on the specified image.
Source image data.
Destination image data.
Resize image using nearest neighbor algorithm.
The class implements image resizing filter using nearest
neighbor algorithm, which does not assume any interpolation.
The filter accepts 8 and 16 bpp grayscale images and 24, 32, 48 and 64 bpp
color images for processing.
Sample usage:
// create filter
ResizeNearestNeighbor filter = new ResizeNearestNeighbor( 400, 300 );
// apply the filter
Bitmap newImage = filter.Apply( image );
Initial image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Width of the new image.
Height of the new image.
Process the filter on the specified image.
Source image data.
Destination image data.
Rotate image using bicubic interpolation.
The class implements image rotation filter using bicubic
interpolation algorithm. It uses bicubic kernel W(x) as described on
Wikipedia
(coefficient a is set to -0.5).
Rotation is performed in counterclockwise direction.
The filter accepts 8 bpp grayscale images and 24 bpp
color images for processing.
Sample usage:
// create filter - rotate for 30 degrees keeping original image size
RotateBicubic filter = new RotateBicubic( 30, true );
// apply the filter
Bitmap newImage = filter.Apply( image );
Initial image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Rotation angle.
This constructor sets the keep size property
to false.
Initializes a new instance of the class.
Rotation angle.
Keep image size or not.
Process the filter on the specified image.
Source image data.
Destination image data.
Rotate image using bilinear interpolation.
Rotation is performed in counterclockwise direction.
The class implements image rotation filter using bilinear
interpolation algorithm.
The filter accepts 8 bpp grayscale images and 24 bpp
color images for processing.
Sample usage:
// create filter - rotate for 30 degrees keeping original image size
RotateBilinear filter = new RotateBilinear( 30, true );
// apply the filter
Bitmap newImage = filter.Apply( image );
Initial image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Rotation angle.
This constructor sets the keep size property
to false.
Initializes a new instance of the class.
Rotation angle.
Keep image size or not.
Process the filter on the specified image.
Source image data.
Destination image data.
Shrink an image by removing specified color from its boundaries.
Removes pixels with specified color from image boundaries making
the image smaller in size.
The filter accepts 8 bpp grayscale and 24 bpp color images for processing.
Sample usage:
// create filter
Shrink filter = new Shrink( Color.Black );
// apply the filter
Bitmap newImage = filter.Apply( image );
Initial image:
Result image:
Format translations dictionary.
Color to remove from boundaries.
Initializes a new instance of the class.
Initializes a new instance of the class.
Color to remove from boundaries.
Calculates new image size.
Source image data.
New image size - size of the destination image.
Process the filter on the specified image.
Source image data.
Destination image data.
Performs quadrilateral transformation of an area in the source image.
The class implements simple algorithm described by
Olivier Thill
for transforming quadrilateral area from a source image into rectangular image.
The idea of the algorithm is to find, for each line of the destination
rectangular image, a corresponding line connecting the "left" and "right" sides of the
quadrilateral in the source image. That line is then linearly transformed into the
line in the destination image.
Due to the simplicity of the algorithm, it does not perform any perspective correction.
To make sure the algorithm works correctly, it is preferred if the
"left-top" corner of the quadrilateral (in screen coordinates) is
specified first in the list of quadrilateral corners. At the very least, the
user needs to make sure that the "left" side (the side connecting the first and the last
corners) and the "right" side (the side connecting the second and third corners) are
not horizontal.
Use the QuadrilateralTransformation filter to avoid the above mentioned limitations;
it is a more advanced quadrilateral transformation algorithm (although a bit more
computationally expensive).
The image processing filter accepts 8 bpp grayscale images and 24/32 bpp
color images for processing.
Sample usage:
// define quadrilateral's corners
List<IntPoint> corners = new List<IntPoint>( );
corners.Add( new IntPoint( 99, 99 ) );
corners.Add( new IntPoint( 156, 79 ) );
corners.Add( new IntPoint( 184, 126 ) );
corners.Add( new IntPoint( 122, 150 ) );
// create filter
SimpleQuadrilateralTransformation filter =
new SimpleQuadrilateralTransformation( corners, 200, 200 );
// apply the filter
Bitmap newImage = filter.Apply( image );
Initial image:
Result image:
Format translations dictionary.
See the base class documentation for additional information.
New image width.
New image height.
Automatic calculation of destination image size or not.
The property specifies how to calculate the size of the destination (transformed)
image. If the property is set to false, then the new width and
new height properties have effect and the destination image's size is
specified by the user. If the property is set to true, then setting the above
mentioned properties does not have any effect, and the destination image's size is
automatically calculated from the source quadrilateral - width and height
come from the lengths of its longest edges.
Default value is set to true.
Quadrilateral's corners in source image.
The property specifies four corners of the quadrilateral area
in the source image to be transformed.
See documentation to the
class itself for additional information.
Width of the new transformed image.
The property defines the width of the destination image, which receives the
transformed quadrilateral image.
Setting the property does not have any effect if automatic size calculation
is set to true. In that case the destination image's width
is calculated automatically from the source quadrilateral.
Height of the new transformed image.
The property defines the height of the destination image, which receives the
transformed quadrilateral image.
Setting the property does not have any effect if automatic size calculation
is set to true. In that case the destination image's height
is calculated automatically from the source quadrilateral.
Specifies if bilinear interpolation should be used or not.
Default value is set to true, so interpolation is used.
Initializes a new instance of the class.
Initializes a new instance of the class.
Corners of the source quadrilateral area.
Width of the new transformed image.
Height of the new transformed image.
This constructor sets automatic size calculation to
false, which means that the destination image will have the width and
height specified by the user.
Initializes a new instance of the class.
Corners of the source quadrilateral area.
This constructor sets automatic size calculation to
true, which means that the destination image will have its width and
height automatically calculated from the source quadrilateral.
Calculates new image size.
Source image data.
New image size - size of the destination image.
Source quadrilateral was not set.
Process the filter on the specified image.
Source image data.
Destination image data.
Transform polar image into rectangle.
The image processing routine is the opposite transformation to the one done by the
TransformToPolar routine, i.e. transformation from a polar image into a rectangle. The produced effect is similar to GIMP's
"Polar Coordinates" distortion filter (or its equivalent in Photoshop).
The filter accepts 8 bpp grayscale and 24/32 bpp color images for processing.
Sample usage:
// create filter
TransformFromPolar filter = new TransformFromPolar( );
filter.OffsetAngle = 0;
filter.CirlceDepth = 1;
filter.UseOriginalImageSize = false;
filter.NewSize = new Size( 360, 120 );
// apply the filter
Bitmap newImage = filter.Apply( image );
Initial image:
Result image:
Circularity coefficient of the mapping, [0, 1].
The property specifies the circularity coefficient of the mapping to be done.
If the coefficient is set to 1, then the destination image will be produced by mapping
an ideal circle from the source image, placed at the source image's centre with a
radius equal to the minimum distance from the centre to the image's edge. If the coefficient
is set to 0, then the mapping will use the entire area of the source image (the circle will
be extended toward the edges). By changing the property from 0 to 1 the user may balance
the circularity of the produced output.
Default value is set to 1.
Offset angle used to shift mapping, [-360, 360] degrees.
The property specifies an offset angle, which can be used to shift the
mapping in the clockwise direction. For example, if the user sets this property to 30, then the
start of the polar mapping is shifted by 30 degrees in the clockwise direction.
Default value is set to 0.
Specifies direction of mapping.
The property specifies the direction of mapping the source image. If the
property is set to true, the image is mapped in the clockwise direction;
otherwise in the counter clockwise direction.
Default value is set to .
Specifies whether the centre of the source image should go to the top or bottom of the result image.
The property specifies the position of the source image's centre in the destination image.
If the property is set to true, it goes to the top of the result image;
otherwise it goes to the bottom.
Default value is set to .
Size of destination image.
The property specifies the size of the result image produced by this image
processing routine when the use-original-image-size property
is set to false.
Both width and height must be in the [1, 10000] range.
Default value is set to 200 x 200.
Use source image size for destination or not.
The property specifies whether the image processing routine should create a destination
image of the same size as the original image, or of the size specified by the
new size property.
Default value is set to .
Format translations dictionary.
See the base class documentation for additional information.
Initializes a new instance of the class.
Calculates new image size.
Source image data.
New image size - size of the destination image.
Process the filter on the specified image.
Source image data.
Destination image data.
Transform rectangle image into circle (to polar coordinates).
The image processing routine performs a transformation of the source image into a
circle (polar transformation). The produced effect is similar to GIMP's "Polar Coordinates"
distortion filter (or its equivalent in Photoshop).
The filter accepts 8 bpp grayscale and 24/32 bpp color images for processing.
Sample usage:
// create filter
TransformToPolar filter = new TransformToPolar( );
filter.OffsetAngle = 0;
filter.CirlceDepth = 1;
filter.UseOriginalImageSize = false;
filter.NewSize = new Size( 200, 200 );
// apply the filter
Bitmap newImage = filter.Apply( image );
Initial image:
Result image:
Circularity coefficient of the mapping, [0, 1].
The property specifies the circularity coefficient of the mapping to be done.
If the coefficient is set to 1, then the mapping will produce an ideal circle. If the coefficient
is set to 0, then the mapping will occupy the entire area of the destination image (the circle will
be extended toward the edges). By changing the property from 0 to 1 the user may balance
the circularity of the produced output.
Default value is set to 1.
Offset angle used to shift mapping, [-360, 360] degrees.
The property specifies an offset angle, which can be used to shift the
mapping in the counter clockwise direction. For example, if the user sets this property to 30, then the
start of the polar mapping is shifted by 30 degrees in the counter clockwise direction.
Default value is set to 0.
Specifies direction of mapping.
The property specifies the direction of mapping the source image's X axis. If the
property is set to true, the image is mapped in the clockwise direction;
otherwise in the counter clockwise direction.
Default value is set to .
Specifies if top of the source image should go to center or edge of the result image.
The property specifies the position of the source image's top line in the destination
image. If the property is set to true, it goes to the center of the result image;
otherwise it goes to the edge.
Default value is set to .
Fill color to use for unprocessed areas.
The property specifies the fill color, which is used to fill unprocessed areas.
If the circularity coefficient is greater than 0, there will be some areas at
the image's edges which are not filled by the produced "circular" image, but are filled with
the specified color.
Default value is set to .
Size of destination image.
The property specifies the size of the result image produced by this image
processing routine when the use-original-image-size property
is set to false.
Both width and height must be in the [1, 10000] range.
Default value is set to 200 x 200.
Use source image size for destination or not.
The property specifies whether the image processing routine should create a destination
image of the same size as the original image, or of the size specified by the
new size property.
Default value is set to .
Format translations dictionary.
See the base class documentation for additional information.
Initializes a new instance of the class.
Calculates new image size.
Source image data.
New image size - size of the destination image.
Process the filter on the specified image.
Source image data.
Destination image data.
Extract YCbCr channel from image.
The filter extracts the specified YCbCr channel of a color image and returns
it in the form of a grayscale image.
The filter accepts 24 and 32 bpp color images and produces
8 bpp grayscale images.
Sample usage:
// create filter
YCbCrExtractChannel filter = new YCbCrExtractChannel( YCbCr.CrIndex );
// apply the filter
Bitmap crChannel = filter.Apply( image );
Format translations dictionary.
YCbCr channel to extract.
Default value is set to the Y channel.
Invalid channel was specified.
Initializes a new instance of the class.
Initializes a new instance of the class.
YCbCr channel to extract.
Process the filter on the specified image.
Source image data.
Destination image data.
Color filtering in YCbCr color space.
The filter operates in the YCbCr color space and filters
pixels whose color is inside/outside of the specified YCbCr range -
it keeps pixels with colors inside/outside of the specified range and fills the
rest with the specified color.
The filter accepts 24 and 32 bpp color images for processing.
Sample usage:
// create filter
YCbCrFiltering filter = new YCbCrFiltering( );
// set color ranges to keep
filter.Cb = new Range( -0.2f, 0.0f );
filter.Cr = new Range( 0.26f, 0.5f );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
Range of Y component, [0, 1].
Range of Cb component, [-0.5, 0.5].
Range of Cr component, [-0.5, 0.5].
Fill color used to fill filtered pixels.
Determines whether pixels should be filled inside or outside the specified
color range.
Default value is set to true, which means
the filter removes colors outside of the specified range.
Determines whether the Y value of filtered pixels should be updated.
The property specifies if the Y channel of filtered pixels should be
updated with the value from the fill color or not.
Default value is set to .
Determines whether the Cb value of filtered pixels should be updated.
The property specifies if the Cb channel of filtered pixels should be
updated with the value from the fill color or not.
Default value is set to .
Determines whether the Cr value of filtered pixels should be updated.
The property specifies if the Cr channel of filtered pixels should be
updated with the value from the fill color or not.
Default value is set to .
Initializes a new instance of the class.
Initializes a new instance of the class.
Range of Y component.
Range of Cb component.
Range of Cr component.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Linear correction of YCbCr channels.
The filter operates in the YCbCr color space and provides
the facility of linear correction of its channels - mapping specified channels'
input ranges to specified output ranges.
The filter accepts 24 and 32 bpp color images for processing.
Sample usage:
// create filter
YCbCrLinear filter = new YCbCrLinear( );
// configure the filter
filter.InCb = new Range( -0.276f, 0.163f );
filter.InCr = new Range( -0.202f, 0.500f );
// apply the filter
filter.ApplyInPlace( image );
Initial image:
Result image:
Y component's input range.
Y component is measured in the range of [0, 1].
Cb component's input range.
Cb component is measured in the range of [-0.5, 0.5].
Cr component's input range.
Cr component is measured in the range of [-0.5, 0.5].
Y component's output range.
Y component is measured in the range of [0, 1].
Cb component's output range.
Cb component is measured in the range of [-0.5, 0.5].
Cr component's output range.
Cr component is measured in the range of [-0.5, 0.5].
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Replace channel of YCbCr color space.
Replaces the specified YCbCr channel of a color image with the
specified grayscale image.
The filter is quite useful in conjunction with the YCbCrExtractChannel filter
(however it may be used alone in some cases). Using that filter
it is possible to extract one of the YCbCr channels, perform some image processing on it, and then
put it back into the original color image.
The filter accepts 24 and 32 bpp color images for processing.
Sample usage:
// create YCbCrExtractChannel filter for channel extracting
YCbCrExtractChannel extractFilter = new YCbCrExtractChannel(
YCbCr.CbIndex );
// extract Cb channel
Bitmap cbChannel = extractFilter.Apply( image );
// invert the channel
Invert invertFilter = new Invert( );
invertFilter.ApplyInPlace( cbChannel );
// put the channel back into the source image
YCbCrReplaceChannel replaceFilter = new YCbCrReplaceChannel(
YCbCr.CbIndex, cbChannel );
replaceFilter.ApplyInPlace( image );
Initial image:
Result image:
Format translations dictionary.
YCbCr channel to replace.
Default value is set to the Y channel.
Invalid channel was specified.
Grayscale image to use for channel replacement.
Setting this property will clear the unmanaged channel image property -
only one channel image is allowed: managed or unmanaged.
The channel image should be an 8 bpp indexed (grayscale) image.
Unmanaged grayscale image to use for channel replacement.
Setting this property will clear the managed channel image property -
only one channel image is allowed: managed or unmanaged.
The channel image should be an 8 bpp indexed (grayscale) image.
Initializes a new instance of the class.
YCbCr channel to replace.
Initializes a new instance of the class.
YCbCr channel to replace.
Channel image to use for replacement.
Initializes a new instance of the class.
YCbCr channel to replace.
Unmanaged channel image to use for replacement.
Process the filter on the specified image.
Source image data.
Image rectangle for processing by the filter.
Channel image was not specified.
Channel image size does not match source
image size.
Difference of Gaussians filter.
In imaging science, the difference of Gaussians is a feature
enhancement algorithm that involves the subtraction of one blurred
version of an original image from another, less blurred version of
the original.
In the simple case of grayscale images, the blurred images are
obtained by convolving the original grayscale images with Gaussian
kernels having differing standard deviations. Blurring an image using
a Gaussian kernel suppresses only high-frequency spatial information.
Subtracting one image from the other preserves spatial information that
lies between the range of frequencies that are preserved in the two blurred
images. Thus, the difference of Gaussians is a band-pass filter that
discards all but a handful of spatial frequencies that are present in the
original grayscale image.
This filter implementation has been contributed by Diego Catalano.
References:
-
Wikipedia contributors. "Difference of Gaussians." Wikipedia, The Free
Encyclopedia. Wikipedia, The Free Encyclopedia, 1 Jun. 2013. Web. 10 Feb.
2014.
Bitmap image = ... // Lena's famous picture
// Create a new Difference of Gaussians
var DoG = new DifferenceOfGaussians();
// Apply the filter
Bitmap result = DoG.Apply(image);
// Show on the screen
ImageBox.Show(result);
The resulting image is shown below.
Gets or sets the first Gaussian filter.
Gets or sets the second Gaussian filter.
Gets or sets the subtract filter used to compute
the difference of the two Gaussian blurs.
Format translations dictionary.
Initializes a new instance of the class.
Initializes a new instance of the class.
The first window size. Default is 3.
The second window size. Default is 4.
Initializes a new instance of the class.
The window size for the first Gaussian. Default is 3.
The window size for the second Gaussian. Default is 4.
The sigma for the first Gaussian. Default is 0.4.
The sigma for the second Gaussian. Default is 0.4.
Initializes a new instance of the class.
The window size for the first Gaussian. Default is 3.
The window size for the second Gaussian. Default is 4.
The sigma for both Gaussian filters. Default is 0.4.
Process the filter on the specified image.
Source image data.
Fast Variance filter.
The Fast Variance filter replaces each pixel in an image with its
neighborhood's online variance. This filter differs from the
Variance filter because it uses only a single pass
over the image.
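To illustrate the single-pass idea, the sketch below shows Welford's online variance update over a neighborhood of pixel values; this is a textbook illustration of the technique, not a copy of the filter's internal code:
// Welford's online algorithm: a single pass computes the variance
// without storing or revisiting the samples.
static double OnlineVariance(byte[] neighborhood)
{
    int count = 0;
    double mean = 0, m2 = 0;
    foreach (byte pixel in neighborhood)
    {
        count++;
        double delta = pixel - mean;   // deviation from the running mean
        mean += delta / count;         // update the running mean
        m2 += delta * (pixel - mean);  // accumulate squared deviations
    }
    return count > 1 ? m2 / (count - 1) : 0;
}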
Bitmap image = ... // Lena's picture
// Create a new Variance filter:
var variance = new FastVariance();
// Compute the filter
Bitmap result = variance.Apply(image);
// Show on the screen
ImageBox.Show(result);
The resulting image is shown below:
Gets or sets the radius of the neighborhood
used to compute a pixel's local variance.
Initializes a new instance of the class.
Initializes a new instance of the class.
The radius of the neighborhood used to compute a pixel's local variance.
Format translations dictionary.
Process the filter on the specified image.
Source image data.
Destination image data.
High boost filter.
The High-boost filter can be used to emphasize high frequency
components (i.e. points of contrast) without removing the low
frequency ones.
This filter implementation has been contributed by Diego Catalano.
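Sample usage (a minimal sketch, assuming the filter follows the same Apply pattern as the other filters in this document; the constructor arguments correspond to the boost value and kernel size described below):
// create filter with a boost value of 8 and a kernel size of 3
HighBoost filter = new HighBoost(8, 3);
// apply the filter
Bitmap result = filter.Apply(image);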
Kernel size, [3, 21].
Size of Gaussian kernel.
Default value is set to 5.
Gets or sets the boost value. Default is 9.
Initializes a new instance of the class.
Initializes a new instance of the class.
The boost value. Default is 8.
Initializes a new instance of the class.
The boost value. Default is 8.
The kernel size. Default is 3.
Extract the biggest blob from image.
The filter locates the biggest blob in the source image and extracts it.
The filter can also use the source image for locating the biggest blob only, while extracting it from
another image, which is set using the original image property. The original image
usually is the source of the processed image.
The filter accepts 8 bpp grayscale images and 24/32 bpp color images for processing, both as the source image passed to
the Apply method and as the original image.
Sample usage:
// create filter
var filter = new ExtractBiggestBlob();
// apply the filter
Bitmap biggestBlobsImage = filter.Apply(image);
Initial image:
Result image:
Position of the extracted blob.
After applying the filter, this property keeps the position of the extracted
blob in the source image.
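For example (a hedged sketch, assuming the property is exposed as BlobPosition, as this description suggests):
// locate and extract the biggest blob
var filter = new ExtractBiggestBlob();
Bitmap biggestBlob = filter.Apply(image);
// read back where the blob was found in the source image
IntPoint position = filter.BlobPosition;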
Format translations dictionary.
The dictionary defines which pixel formats are supported for
source images and which pixel format will be used for the resulting image.
See the base class documentation for more information.
Original image, which is the source of the processed image where the biggest blob is searched for.
The property may be set to null. In this case the biggest blob
is extracted from the image the filter is applied to.
Apply filter to an image.
Source image to get biggest blob from.
Returns image of the biggest blob.
Unsupported pixel format of the source image.
Unsupported pixel format of the original image.
Source and original images must have the same size.
The source image does not contain any blobs.
Apply filter to an image.
Source image to get biggest blob from.
Returns image of the biggest blob.
Unsupported pixel format of the source image.
Unsupported pixel format of the original image.
Source and original images must have the same size.
The source image does not contain any blobs.
Apply filter to an image (not implemented).
Image in unmanaged memory.
Returns filter's result obtained by applying the filter to
the source image.
Apply filter to an image (not implemented).
Source image to be processed.
Destination image to store filter's result.
RG Chromaticity.
References:
-
Wikipedia contributors. "rg chromaticity." Wikipedia, The Free Encyclopedia. Wikipedia,
The Free Encyclopedia. Available at http://en.wikipedia.org/wiki/Rg_chromaticity
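Sample usage (a minimal sketch; the class name and parameterless construction are assumptions based on the conventions of the other filters in this document):
// create filter; it is assumed to apply the standard rg transform,
// r = R / (R + G + B) and g = G / (R + G + B), to each pixel
var rgChromaticity = new RGChromacity();
// apply the filter
Bitmap result = rgChromaticity.Apply(image);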
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Sauvola Threshold.
The Sauvola filter is a variation of the Niblack
thresholding filter.
This filter implementation has been contributed by Diego Catalano.
References:
-
Sauvola, Jaakko, and Matti Pietikäinen. "Adaptive document image binarization."
Pattern Recognition 33.2 (2000): 225-236.
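Before the full sample below, here is a sketch of the per-pixel decision rule commonly given for Sauvola's method, using the radius, k and R parameters documented further down (the exact formulation used by this implementation is an assumption):
// Hedged sketch of the classic Sauvola rule for a single pixel:
// T(x, y) = m(x, y) * (1 + k * (s(x, y) / R - 1)),
// where m and s are the local window's mean and standard deviation.
static byte SauvolaDecision(double mean, double stdDev, double k, double r, byte pixel)
{
    double threshold = mean * (1 + k * (stdDev / r - 1));
    return (byte)(pixel > threshold ? 255 : 0);
}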
Bitmap image = ... // Lena's picture
// Create a new Sauvola threshold:
var sauvola = new SauvolaThreshold();
// Compute the filter
Bitmap result = sauvola.Apply(image);
// Show on the screen
ImageBox.Show(result);
The resulting image is shown below:
Gets or sets the filter convolution
radius. Default is 15.
Gets or sets the user-defined
parameter k. Default is 0.5.
Gets or sets the dynamic range of the
standard deviation, R. Default is 128.
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Destination image data.
Filter to mark (highlight) points in an image.
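Sample usage (a hedged sketch; the constructor and property names shown are assumptions inferred from the property descriptions below):
// points to highlight (could come from a corner detector's output)
List<IntPoint> points = new List<IntPoint>();
points.Add(new IntPoint(10, 10));
points.Add(new IntPoint(50, 75));
// create filter (a constructor taking the point list is an assumption)
PointsMarker marker = new PointsMarker(points);
marker.MarkerColor = Color.Red;  // color property name is an assumption
// apply the filter in place
marker.ApplyInPlace(image);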
Format translations dictionary.
Color used to mark corners.
Gets or sets the set of points to mark.
Gets or sets the width of the points to be drawn.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Niblack Threshold.
The Niblack filter is a local thresholding algorithm that separates
white and black pixels given the local mean and standard deviation
for the current window.
This filter implementation has been contributed by Diego Catalano.
References:
-
W. Niblack, An Introduction to Digital Image Processing, pp. 115-116.
Prentice Hall, 1986.
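Before the full sample below, a sketch of the per-pixel decision rule usually given for Niblack's method, using the radius, k and mean offset C parameters documented further down (the sign convention of the offset is an assumption):
// Hedged sketch of the classic Niblack rule for a single pixel:
// T(x, y) = m(x, y) + k * s(x, y) - C,
// where m and s are the local window's mean and standard deviation.
static byte NiblackDecision(double mean, double stdDev, double k, double c, byte pixel)
{
    double threshold = mean + k * stdDev - c;
    return (byte)(pixel > threshold ? 255 : 0);
}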
Bitmap image = ... // Lena's picture
// Create a new Niblack threshold:
var niblack = new NiblackThreshold();
// Compute the filter
Bitmap result = niblack.Apply(image);
// Show on the screen
ImageBox.Show(result);
The resulting image is shown below:
Gets or sets the filter convolution
radius. Default is 15.
Gets or sets the user-defined
parameter k. Default is 0.2.
Gets or sets the mean offset C. This value should
be between 0 and 255. The default value is 0.
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Destination image data.
Rotate image using nearest neighbor algorithm.
The class implements image rotation filter using nearest
neighbor algorithm, which does not assume any interpolation.
Rotation is performed in counterclockwise direction.
The filter accepts 8/16 bpp grayscale images and 24/48 bpp color images
for processing.
Sample usage:
// create filter - rotate for 30 degrees keeping original image size
RotateNearestNeighbor filter = new RotateNearestNeighbor( 30, true );
// apply the filter
Bitmap newImage = filter.Apply( image );
Initial image:
Result image:
Format translations dictionary.
Initializes a new instance of the class.
Rotation angle.
This constructor sets the keep size property to
false.
Initializes a new instance of the class.
Rotation angle.
Keep image size or not.
Process the filter on the specified image.
Source image data.
Destination image data.
White Patch filter for color normalization.
Bitmap image = ... // Lena's famous picture
// Create the White Patch filter
var whitePatch = new WhitePatch();
// Apply the filter
Bitmap result = whitePatch.Apply(image);
// Show on the screen
ImageBox.Show(result);
The resulting image is shown below.
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Gray World filter for color normalization.
The grey world normalization makes the assumption that changes in the
lighting spectrum can be modeled by three constant factors applied to
the red, green and blue channels of color [2]. More specifically, a change
in illumination color can be modeled as scalings α, β and γ of the R,
G and B color channels, and as such the grey world algorithm is invariant
to illumination color variations.
References:
-
Wikipedia Contributors, "Color normalization". Available at
http://en.wikipedia.org/wiki/Color_normalization
-
Jose M. Buenaposada; Luis Baumela. Variations of Grey World for
face tracking (Report).
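As an illustration of those scaling factors, the following is a minimal sketch of the textbook gray world correction (the classic formulation, not necessarily this class's exact internals): each channel is scaled so that its mean matches the global gray level.
// Classic gray world: derive per-channel scale factors from channel means.
static void GrayWorldScale(double meanR, double meanG, double meanB,
    out double alpha, out double beta, out double gamma)
{
    double gray = (meanR + meanG + meanB) / 3.0;  // global "gray" level
    alpha = gray / meanR;  // scale factor for the red channel
    beta = gray / meanG;   // scale factor for the green channel
    gamma = gray / meanB;  // scale factor for the blue channel
}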
Bitmap image = ... // Lena's famous picture
// Create a new Gray World filter
var grayWorld = new GrayWorld();
// Apply the filter
Bitmap result = grayWorld.Apply(image);
// Show on the screen
ImageBox.Show(result);
The resulting image is shown below.
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Kuwahara filter.
Bitmap image = ... // Lena's famous picture
// Create a new Kuwahara filter
Kuwahara kuwahara = new Kuwahara();
// Apply the Kuwahara filter
Bitmap result = kuwahara.Apply(image);
// Show on the screen
ImageBox.Show(result);
Gets the size of the kernel used in the Kuwahara filter. This
should be odd and greater than or equal to five. Default is 5.
Gets the size of each of the four inner blocks used in the
Kuwahara filter. This is always half the
kernel size plus one (for example, a kernel of size 5 uses four 3x3 blocks).
The size of each inner block, or k / 2 + 1,
where k is the kernel size.
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Wolf Jolion Threshold.
The Wolf-Jolion threshold filter is a variation
of the Sauvola thresholding filter.
This filter implementation has been contributed by Diego Catalano.
References:
-
C. Wolf, J.M. Jolion, F. Chassaing. "Text Localization, Enhancement and
Binarization in Multimedia Documents." Proceedings of the 16th International
Conference on Pattern Recognition, 2002.
Available at http://liris.cnrs.fr/christian.wolf/papers/icpr2002v.pdf
Bitmap image = ... // Lena's picture
// Create a new Wolf-Joulion threshold:
var wolfJoulion = new WolfJoulionThreshold();
// Compute the filter
Bitmap result = wolfJoulion.Apply(image);
// Show on the screen
ImageBox.Show(result);
Gets or sets the filter convolution
radius. Default is 15.
Gets or sets the user-defined
parameter k. Default is 0.5.
Gets or sets the dynamic range of the
standard deviation, R. Default is 128.
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Destination image data.
Compass convolution filter.
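Sample usage (a hedged sketch: it is assumed the filter is constructed from a sequence of directional convolution kernels, such as the Robinson masks documented later in this section, and keeps the strongest response over all directions):
// two hypothetical directional 3x3 kernels (Robinson-style gradients);
// real code could use the masks exposed by RobinsonEdgeDetector
int[][,] masks =
{
    new int[,] { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } },  // "East"
    new int[,] { { 1, 2, 1 }, { 0, 0, 0 }, { -1, -2, -1 } }   // "North"
};
// create filter from the masks (the constructor shape is an assumption)
CompassConvolution filter = new CompassConvolution(masks);
// apply the filter
Bitmap result = filter.Apply(image);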
Initializes a new instance of the class.
Format translations dictionary.
Process the filter on the specified image.
Source image data.
Destination image data.
Exponential filter.
Simple exp image filter. Applies the exponential
function to each pixel in the image, clipping values as needed.
The resultant image can be converted back using the
Logarithm filter.
Bitmap input = ...
// Apply log
Logarithm log = new Logarithm();
Bitmap output = log.Apply(input);
// Revert log
Exponential exp = new Exponential();
Bitmap reconstruction = exp.Apply(output);
// Show results on screen
ImageBox.Show("input", input);
ImageBox.Show("output", output);
ImageBox.Show("reconstruction", reconstruction);
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Log filter.
Simple log image filter. Applies the logarithm
function to each pixel in the image, clipping values as needed.
The resultant image can be converted back using the
Exponential filter.
Bitmap input = ...
// Apply log
Logarithm log = new Logarithm();
Bitmap output = log.Apply(input);
// Revert log
Exponential exp = new Exponential();
Bitmap reconstruction = exp.Apply(output);
// Show results on screen
ImageBox.Show("input", input);
ImageBox.Show("output", output);
ImageBox.Show("reconstruction", reconstruction);
Format translations dictionary.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Robinson's Edge Detector
Robinson's edge detector is a variation of
Kirsch's detector using different convolution masks. Both are examples
of compass convolution filters.
Bitmap image = ... // Lena's picture
// Create a new Robinson's edge detector:
var robinson = new RobinsonEdgeDetector();
// Compute the image edges
Bitmap edges = robinson.Apply(image);
// Show on screen
ImageBox.Show(edges);
The resulting image is shown below:
Initializes a new instance of the class.
Format translations dictionary.
Process the filter on the specified image.
Source image data.
Destination image data.
Gets the North direction Robinson kernel mask.
Gets the Northwest direction Robinson kernel mask.
Gets the West direction Robinson kernel mask.
Gets the Southwest direction Robinson kernel mask.
Gets the South direction Robinson kernel mask.
Gets the Southeast direction Robinson kernel mask.
Gets the East direction Robinson kernel mask.
Gets the Northeast direction Robinson kernel mask.
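All compass detectors in this family share the same outer rule: convolve the
neighborhood with each directional mask and keep the strongest response. A sketch of
that rule for a single pixel, where kernels is assumed to hold the eight 3x3 masks
listed above and pixelAt is a hypothetical bounds-checked accessor (this is not the
filter's actual code):
int best = 0;
foreach (int[,] kernel in kernels) // the eight direction masks
{
    int response = 0;
    for (int i = -1; i <= 1; i++)
        for (int j = -1; j <= 1; j++)
            response += kernel[i + 1, j + 1] * pixelAt(x + j, y + i);
    best = Math.Max(best, Math.Abs(response));
}
byte edgeStrength = (byte)Math.Min(255, best);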
Gabor filter.
In image processing, a Gabor filter, named after Dennis Gabor, is a linear
filter used for edge detection. Frequency and orientation representations
of Gabor filters are similar to those of the human visual system, and they
have been found to be particularly appropriate for texture representation
and discrimination. In the spatial domain, a 2D Gabor filter is a Gaussian
kernel function modulated by a sinusoidal plane wave. The Gabor filters are
self-similar: all filters can be generated from one mother wavelet by dilation
and rotation.
References:
-
Wikipedia Contributors, "Gabor filter". Available at
http://en.wikipedia.org/wiki/Gabor_filter
The following example applies a Gabor filter to detect lines
at 45 degrees in the following image:
Bitmap input = ...;
// Create a new Gabor filter
GaborFilter filter = new GaborFilter();
// Apply the filter
Bitmap output = filter.Apply(input);
// Show the output
ImageBox.Show(output);
The resulting image is shown below.
Gets or sets the size of the filter. Default is 3.
Gets or sets the Gaussian variance for the filter. Default is 2.
Gets or sets the orientation for the filter, in radians. Default is 0.6.
Gets or sets the wavelength for the filter. Default is 4.0.
Gets or sets the aspect ratio for the filter. Default is 0.3.
Gets or sets the phase offset for the filter. Default is 1.0.
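The properties above map directly onto the usual real-valued Gabor expression,
g(x, y) = exp(-(x'^2 + γ^2 y'^2) / (2σ^2)) * cos(2π x' / λ + ψ), with
x' = x cos θ + y sin θ and y' = -x sin θ + y cos θ. A sketch of a kernel builder
using that formula, assuming the Gaussian variance property is σ^2 and an odd kernel
size (an illustration, not necessarily the filter's exact discretization):
static double[,] GaborKernel(int size, double variance, double theta,
    double lambda, double gamma, double psi)
{
    var kernel = new double[size, size];
    int half = size / 2;
    for (int y = -half; y <= half; y++)
        for (int x = -half; x <= half; x++)
        {
            // rotate coordinates by the orientation angle
            double xr = x * Math.Cos(theta) + y * Math.Sin(theta);
            double yr = -x * Math.Sin(theta) + y * Math.Cos(theta);
            // Gaussian envelope modulated by a sinusoidal plane wave
            kernel[y + half, x + half] =
                Math.Exp(-(xr * xr + gamma * gamma * yr * yr) / (2 * variance)) *
                Math.Cos(2 * Math.PI * xr / lambda + psi);
        }
    return kernel;
}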
Initializes a new instance of the class.
Format translations dictionary.
Process the filter on the specified image.
Source image data.
Destination image data.
Kirsch's Edge Detector
The Kirsch operator or Kirsch compass kernel
is a non-linear edge detector that finds the maximum edge strength in a few
predetermined directions. It is named after the computer scientist Russell
A. Kirsch.
References:
-
Wikipedia contributors. "Kirsch operator." Wikipedia, The Free Encyclopedia.
Available at http://en.wikipedia.org/wiki/Kirsch_operator
Bitmap image = ... // Lena's picture
// Create a new Kirsch's edge detector:
var kirsch = new KirschEdgeDetector();
// Compute the image edges
Bitmap edges = kirsch.Apply(image);
// Show on screen
ImageBox.Show(edges);
The resulting image is shown below:
Initializes a new instance of the class.
Format translations dictionary.
Process the filter on the specified image.
Source image data.
Destination image data.
Gets the North direction Kirsch kernel mask.
Gets the Northwest direction Kirsch kernel mask.
Gets the West direction Kirsch kernel mask.
Gets the Southwest direction Kirsch kernel mask.
Gets the South direction Kirsch kernel mask.
Gets the Southeast direction Kirsch kernel mask.
Gets the East direction Kirsch kernel mask.
Gets the Northeast direction Kirsch kernel mask.
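For reference, the North mask is commonly given as the matrix below; the other seven
masks are obtained by rotating its coefficients in 45 degree steps around the center,
and the output is the maximum response over all eight masks:
 5  5  5
-3  0 -3
-3 -3 -3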
Variance filter.
The Variance filter replaces each pixel in an image by its
neighborhood variance. The end result can be regarded as a
border enhancement, making the Variance filter suitable to
be used as an edge detection mechanism.
Bitmap image = ... // Lena's picture
// Create a new Variance filter:
var variance = new Variance();
// Compute the filter
Bitmap result = variance.Apply(image);
// Show on the screen
ImageBox.Show(result);
The resulting image is shown below:
Gets or sets the radius of the neighborhood
used to compute a pixel's local variance.
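The per-pixel computation is the textbook one: the variance over the (2r + 1) x (2r + 1)
window equals the mean of squares minus the square of the mean. A sketch for a single
pixel, where pixelAt is a hypothetical bounds-checked accessor (not the filter's
actual code):
double sum = 0, sumSq = 0;
int count = 0;
for (int i = -r; i <= r; i++)
    for (int j = -r; j <= r; j++)
    {
        double v = pixelAt(x + j, y + i); // hypothetical accessor
        sum += v;
        sumSq += v * v;
        count++;
    }
double mean = sum / count;
double variance = sumSq / count - mean * mean;
byte result = (byte)Math.Min(255, variance);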
Initializes a new instance of the class.
Initializes a new instance of the class.
The radius neighborhood used to compute a pixel's local variance.
Format translations dictionary.
Process the filter on the specified image.
Source image data.
Destination image data.
Combine channel filter.
Format translations dictionary.
The dictionary defines which pixel formats are supported for
source images and which pixel format will be used for the resulting image.
See
for more information.
Constructs a new CombineChannel filter.
Process the filter on the specified image.
Source image data.
Rectification filter for projective transformation.
Format translations dictionary.
Gets or sets the Homography matrix used to map an image passed to
the filter to the overlay image specified at filter creation.
Gets or sets the filling color used to fill blank spaces.
The filling color will only be visible after the image is converted
to 24bpp. The alpha channel will be used internally by the filter.
Constructs a new Rectification filter.
The homography matrix mapping a second image to the overlay image.
Constructs a new Rectification filter.
The homography matrix mapping a second image to the overlay image.
Computes the new image size.
Process the image filter.
Filter to mark (highlight) feature points in an image.
The filter highlights feature points on the image using a given set of points.
The filter accepts 8 bpp grayscale and 24 color images for processing.
Format translations dictionary.
Gets or sets the initial size for a feature point in the map. Default is 5.
Gets or sets the set of points to mark.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Destination image data.
Linear Gradient Blending filter.
The blending filter is able to blend two images using a homography matrix.
A linear alpha gradient is used to smooth out differences between the two
images, effectively blending them into a single image. The gradient is computed
considering the distance between the centers of the two images.
The first image should be passed at the moment of creation of the Blending
filter as the overlay image. A second image may be projected on top of the
overlay image by calling the Apply method and passing the second image as
argument.
Currently the filter always produces 32bpp images, regardless of the format
of the source images. The alpha layer is used as an intermediate mask in the
blending process.
// Let's start with two pictures that have been
// taken from slightly different points of view:
//
Bitmap img1 = Resources.dc_left;
Bitmap img2 = Resources.dc_right;
// Those pictures are shown below:
ImageBox.Show(img1, PictureBoxSizeMode.Zoom, 640, 480);
ImageBox.Show(img2, PictureBoxSizeMode.Zoom, 640, 480);
// Step 1: Detect feature points using Surf Corners Detector
var surf = new SpeededUpRobustFeaturesDetector();
var points1 = surf.ProcessImage(img1);
var points2 = surf.ProcessImage(img2);
// Step 2: Match feature points using a k-NN
var matcher = new KNearestNeighborMatching(5);
var matches = matcher.Match(points1, points2);
// Step 3: Create the matrix using a robust estimator
var ransac = new RansacHomographyEstimator(0.001, 0.99);
MatrixH homographyMatrix = ransac.Estimate(matches);
// Step 4: Project and blend using the homography
Blend blend = new Blend(homographyMatrix, img1);
// Compute the blending algorithm
Bitmap result = blend.Apply(img2);
// Show on screen
ImageBox.Show(result, PictureBoxSizeMode.Zoom, 640, 480);
The resulting image is shown below.
Format translations dictionary.
Gets or sets the Homography matrix used to map an image passed to
the filter to the overlay image specified at filter creation.
Gets or sets the filling color used to fill blank spaces.
The filling color will only be visible after the image is converted
to 24bpp. The alpha channel will be used internally by the filter.
Gets or sets a value indicating whether to blend using a linear
gradient or just superimpose the two images with equal weights.
true to create a gradient; otherwise, false. Default is true.
Gets or sets a value indicating whether only the alpha channel
should be blended. This can be used together with a transparency
mask to selectively blend only portions of the image.
true to blend only the alpha channel; otherwise, false. Default is false.
Constructs a new Blend filter.
The homography matrix mapping a second image to the overlay image.
The overlay image (also called the anchor).
Constructs a new Blend filter.
The overlay image (also called the anchor).
Constructs a new Blend filter.
The homography matrix mapping a second image to the overlay image.
The overlay image (also called the anchor).
Computes the new image size.
Process the image filter.
Computes the distance metric used to compute the blending mask.
Concatenation filter.
Concatenates two images side by side in a single image.
Format translations dictionary.
Creates a new concatenation filter.
The first image to concatenate.
Calculates new image size.
Process the filter on the specified image.
Source image data.
Destination image data.
Filter to mark (highlight) rectangles in an image.
Color used to mark pairs.
Gets or sets the color used to fill
rectangles. Default is Transparent.
The set of rectangles.
The set of rectangles.
Format translations dictionary.
Initializes a new instance of the class.
The color to use to draw the rectangles.
Initializes a new instance of the class.
Set of rectangles to be drawn.
Initializes a new instance of the class.
Set of rectangles to be drawn.
Initializes a new instance of the class.
Set of rectangles to be drawn.
The color to use to draw the rectangles.
Applies the filter to the image.
Filter to mark (highlight) pairs of points in an image.
Color used to mark pairs.
The first set of points.
The corresponding points to the first set of points.
Format translations dictionary.
Initializes a new instance of the class.
Set of starting points.
Set of corresponding points.
Initializes a new instance of the class.
Set of starting points.
Set of corresponding points.
The color of the lines to be marked.
Process the filter on the specified image.
Source image data.
Filter to mark (highlight) points in an image.
The filter highlights points on the image using a given set of points.
The filter accepts 8 bpp grayscale, 24 and 32 bpp color images for processing.
Sample usage:
// Create a new blob counter and process the image
BlobCounter bc = new BlobCounter(image);
// Extract blobs' information
Blob[] blobs = bc.GetObjectsInformation();
bc.ExtractBlobsImage(image, blobs[0], true);
// Extract the first blob's edge points
List<IntPoint> contour = bc.GetBlobsEdgePoints(blobs[0]);
// Create a green points marker with a width of 2 pixels
PointsMarker marker = new PointsMarker(contour, Color.Green, 2);
// Apply the filter to a given color image
marker.ApplyInPlace(colorBlob);
Format translations dictionary.
Color used to mark corners.
Gets or sets the set of points to mark.
Gets or sets the width of the points to be drawn.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Process the filter on the specified image.
Source image data.
Wavelet transform filter.
Bitmap image = ... // Lena's famous picture
// Create a new Haar Wavelet transform filter
var wavelet = new WaveletTransform(new Haar(1));
// Apply the Wavelet transformation
Bitmap result = wavelet.Apply(image);
// Show on the screen
ImageBox.Show(result);
The resulting image is shown below.
// Extract only one of the resulting images
var crop = new Crop(new Rectangle(0, 0,
image.Width / 2, image.Height / 2));
Bitmap quarter = crop.Apply(result);
// Show on the screen
ImageBox.Show(quarter);
The resulting image is shown below.
Constructs a new Wavelet Transform filter.
A wavelet function.
Constructs a new Wavelet Transform filter.
A wavelet function.
True to perform backward transform, false otherwise.
Format translations dictionary.
Gets or sets the Wavelet function
Gets or sets whether the filter should be applied forward or backwards.
Applies the filter to the image.
Information about PNM image's frame.
PNM file version (format), [1, 6].
Maximum pixel's value in source PNM image.
The value is used to scale the image's data, converting it
from the original data range to the range of the
supported bits per pixel format.
Initializes a new instance of the class.
Initializes a new instance of the class.
Image's width.
Image's height.
Number of bits per image's pixel.
Frame's index.
Total frames in the image.
Creates a new object that is a copy of the current instance.
A new object that is a copy of this instance.
Information about FITS image's frame.
Original bits per pixel.
The property specifies the original number of bits per image pixel. For
FITS images the value may be equal to 8, 16, 32, -32 (32 bit image with float data
type for pixel encoding), or -64 (64 bit image with double data type for pixel encoding).
Minimum data value found while parsing the FITS image.
Minimum and maximum data values are used to scale the image's data, converting
it from the original bits per pixel format to the
supported bits per pixel format.
Maximum data value found while parsing the FITS image.
Minimum and maximum data values are used to scale the image's data, converting
it from the original bits per pixel format to the
supported bits per pixel format.
Telescope used for object's observation.
Object acquired during observation.
Observer who performed the acquisition.
Instrument used for observation.
Initializes a new instance of the class.
Initializes a new instance of the class.
Image's width.
Image's height.
Number of bits per image's pixel.
Frame's index.
Total frames in the image.
Creates a new object that is a copy of the current instance.
A new object that is a copy of this instance.
FITS image format decoder.
The FITS (an acronym derived from "Flexible Image Transport System") format
is an astronomical image and table format created and supported by NASA. FITS is the format most
commonly used in astronomy and is designed specifically for scientific data. Different astronomical
organizations keep the images they acquire using telescopes and other equipment in FITS format.
The class extracts image frames only from the main data section of FITS file.
2D (single frame) and 3D (series of frames) data structures are supported.
During image reading/parsing, its data are scaled using minimum and maximum values of
the source image data. FITS tags are not used for this purpose - data are scaled from the
[min, max] range found to the range of supported image format ([0, 255] for 8 bpp grayscale
or [0, 65535] for 16 bpp grayscale image).
Decode first frame of FITS image.
Source stream, which contains encoded image.
Returns decoded image frame.
Not a FITS image format.
Format of the FITS image is not supported.
The stream contains invalid (broken) FITS image.
Open specified stream.
Stream to open.
Returns number of images found in the specified stream.
Not a FITS image format.
Format of the FITS image is not supported.
The stream contains invalid (broken) FITS image.
Decode specified frame.
Image frame to decode.
Receives information about decoded frame.
Returns decoded frame.
No image stream was opened previously.
Stream does not contain frame with specified index.
The stream contains invalid (broken) FITS image.
Close decoding of previously opened stream.
The method does not close the stream itself; it only stops
decoding and cleans up all data associated with it.
Common interface for image decoders. Image decoders can read images stored
in different formats (e.g. PNG, JPG, PNM,
FITS) and transform them into .
The interface also defines methods to work with image formats designed to store
multiple frames and image formats which provide different types of image description
(like acquisition parameters, etc.).
Decode first frame of image from the specified stream.
Source stream, which contains encoded image.
Returns decoded image frame.
For one-frame image formats the method is supposed to decode the single
available frame. For multi-frame image formats the first frame should be
decoded.
Implementations of this method may throw an
exception to report an unrecognized image
format, an exception to report an incorrectly
formatted image, or an exception to report that
certain formats are not supported.
Open specified stream.
Stream to open.
Returns number of images found in the specified stream.
Implementations of this method are supposed to read the image's header,
checking for correct image format and reading its attributes.
Implementations of this method may throw an
exception to report an unrecognized image
format, an exception to report an incorrectly
formatted image, or an exception to report that
certain formats are not supported.
Decode specified frame.
Image frame to decode.
Receives information about decoded frame.
Returns decoded frame.
Implementations of this method may throw an
exception in the case that no image
stream was opened previously, an exception in the
case that the stream does not contain a frame with the specified index, or an
exception to report an incorrectly formatted image.
Close decoding of previously opened stream.
Implementations of this method do not close the stream itself; they only stop
decoding and clean up all data associated with it.
Information about image's frame.
This is a base class which keeps basic information about an image, like its width,
height, etc. Classes which inherit from it may define more properties describing particular
image formats.
Image's width.
Image's height.
Number of bits per image's pixel.
Frame's index.
Total frames in the image.
Image's width.
Image's height.
Number of bits per image's pixel.
Frame's index.
Some image formats support storing multiple frames in one image file.
The property specifies the index of a particular frame.
Total frames in the image.
Some image formats support storing multiple frames in one image file.
The property specifies the total number of frames in the image file.
Initializes a new instance of the class.
Initializes a new instance of the class.
Image's width.
Image's height.
Number of bits per image's pixel.
Frame's index.
Total frames in the image.
Creates a new object that is a copy of the current instance.
A new object that is a copy of this instance.
Image decoder to decode different custom image file formats.
The class represents a helper class which simplifies decoding of image
files by finding the appropriate image decoder automatically (using the list of registered
image decoders). Instead of using the required image decoder directly, users may use this
class, which will find the required decoder by the file's extension.
By default the class will query all referenced assemblies for types that are marked
with the . If the user would like to implement
a new decoder, all that is necessary is to mark a new class with the
and make it implement the interface.
If the class cannot find an appropriate decoder, it will delegate
the file decoding to .NET's internal image decoders.
Obsolete. Please mark your decoder class with the instead.
Decode first frame for the specified file.
File name to read image from.
Returns the decoded image. If the file format supports multiple
frames, the method returns the first frame.
The method uses the table of registered image decoders to find the one
which should be used for the specified file. If no appropriate decoder is
found, the method uses the default .NET image decoding routine (see
).
Decode first frame for the specified file.
File name to read image from.
Information about the decoded image.
Returns the decoded image. If the file format supports multiple
frames, the method returns the first frame.
The method uses the table of registered image decoders to find the one
which should be used for the specified file. If no appropriate decoder is
found, the method uses the default .NET image decoding routine (see
).
PNM image format decoder.
The PNM (an acronym derived from "Portable Any Map") format is an
abstraction of the PBM, PGM and PPM formats. I.e. the name "PNM" refers collectively
to PBM (binary images), PGM (grayscale images) and PPM (color image) image formats.
Images in PNM format can be found in different scientific databases and laboratories,
for example the Yale Face Database and the AT&T Face Database.
Only PNM images of the P2 (ASCII encoded PGM), P3 (ASCII encoded PPM), P5
(binary encoded PGM) and P6 (binary encoded PPM) formats are supported at this point.
The maximum supported pixel value is 255 at this point.
The class supports only one-frame PNM images. As specified in the format
specification, multi-frame PNM images appeared starting from 2000.
Decode first frame of PNM image.
Source stream, which contains encoded image.
Returns decoded image frame.
Not a PNM image format.
Format of the PNM image is not supported.
The stream contains invalid (broken) PNM image.
Open specified stream.
Stream to open.
Returns number of images found in the specified stream.
Not a PNM image format.
Format of the PNM image is not supported.
The stream contains invalid (broken) PNM image.
Decode specified frame.
Image frame to decode.
Receives information about decoded frame.
Returns decoded frame.
No image stream was opened previously.
Stream does not contain frame with specified index.
The stream contains invalid (broken) PNM image.
Close decoding of previously opened stream.
The method does not close the stream itself; it only stops
decoding and cleans up all data associated with it.
Set of tools used internally in AForge.Imaging.Formats library.
Create and initialize new grayscale image.
Image width.
Image height.
Returns the newly created grayscale image.
The Accord.Imaging.Image.CreateGrayscaleImage() function
could be used instead, which does the same. But it was not used, to avoid
a dependency on the AForge.Imaging library.
Read specified amount of bytes from the specified stream.
Source stream to read data from.
Buffer to read data into.
Offset in buffer to put data into.
Number of bytes to read.
Returns the total number of bytes read. It may be smaller than the requested amount only
in the case that the end of the stream was reached.
This tool function guarantees that the requested number of bytes
is read from the source stream (.NET streams don't guarantee this and may return fewer bytes
than requested). Only in the case that the end of the stream was reached may the function
return with fewer bytes read.
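The usual pattern behind such a guarantee is a simple loop around Stream.Read; a
sketch of the idea (assuming using System.IO; not the library method itself):
static int ReadFully(Stream stream, byte[] buffer, int offset, int count)
{
    int total = 0;
    while (total < count)
    {
        // a single Read call is allowed to return fewer bytes than asked for
        int read = stream.Read(buffer, offset + total, count - total);
        if (read == 0)
            break; // end of stream reached
        total += read;
    }
    return total;
}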
HSL components.
The class encapsulates HSL color components and can be used to implement
logic for reading, writing and converting to and from HSL color representations.
The following examples show how to convert to and from various pixel representations:
Hue component.
Hue is measured in the range of [0, 359].
Saturation component.
Saturation is measured in the range of [0, 1].
Luminance value.
Luminance is measured in the range of [0, 1].
Initializes a new instance of the class.
Hue component.
Saturation component.
Luminance component.
Convert from RGB to HSL color space.
Source color in RGB color space.
Destination color in HSL color space.
See HSL and HSV Wiki
for information about the algorithm to convert from RGB to HSL.
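For reference, the textbook conversion on normalized [0, 1] RGB components looks as
follows; this is a sketch of the standard algorithm, not necessarily the class's exact code:
static void RgbToHsl(double r, double g, double b,
    out double h, out double s, out double l)
{
    double max = Math.Max(r, Math.Max(g, b));
    double min = Math.Min(r, Math.Min(g, b));
    l = (max + min) / 2.0; // luminance is the mid-point of max and min
    if (max == min)
    {
        h = 0; s = 0; // achromatic (a shade of gray)
        return;
    }
    double d = max - min;
    s = (l > 0.5) ? d / (2.0 - max - min) : d / (max + min);
    // hue depends on which channel is the largest
    if (max == r)      h = 60.0 * ((g - b) / d);
    else if (max == g) h = 60.0 * ((b - r) / d + 2.0);
    else               h = 60.0 * ((r - g) / d + 4.0);
    if (h < 0) h += 360.0; // wrap into [0, 360)
}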
Convert from RGB to HSL color space.
Source color in RGB color space.
Returns instance, which represents converted color value.
Convert from HSL to RGB color space.
Source color in HSL color space.
Destination color in RGB color space.
Convert the color to RGB color space.
Returns instance, which represents converted color value.
Performs an explicit conversion from to .
The HSL color.
The result of the conversion.
Performs an explicit conversion from to .
The HSL color.
The result of the conversion.
RGB components.
The class encapsulates RGB color components and can be used to implement
logic for reading, writing and converting to and from RGB color representations.
The PixelFormat.Format24bppRgb
actually refers to a BGR pixel format.
The following examples show how to convert to and from various pixel representations:
Index of red component.
Index of green component.
Index of blue component.
Index of alpha component for ARGB images.
Red component.
Green component.
Blue component.
Alpha component.
Color value of the class.
Initializes a new instance of the class.
Red component.
Green component.
Blue component.
Initializes a new instance of the class.
Red component.
Green component.
Blue component.
Alpha component.
Initializes a new instance of the class.
Initialize from specified color.
Performs an explicit conversion from to .
The RGB color.
The result of the conversion.
Performs an explicit conversion from to .
The RGB color.
The result of the conversion.
YCbCr components.
The class encapsulates YCbCr color components and can be used to implement
logic for reading, writing and converting to and from YCbCr color representations.
The following examples show how to convert to and from various pixel representations:
Index of Y component.
Index of Cb component.
Index of Cr component.
Y component.
Cb component.
Cr component.
Initializes a new instance of the class.
Y component.
Cb component.
Cr component.
Convert from RGB to YCbCr color space (Rec 601-1 specification).
Source color in RGB color space.
Destination color in YCbCr color space.
Convert from RGB to YCbCr color space (Rec 601-1 specification).
Source color in RGB color space.
Returns instance, which represents converted color value.
Convert from YCbCr to RGB color space.
Source color in YCbCr color space.
Destination color in RGB color space.
Convert the color to RGB color space.
Returns instance, which represents converted color value.
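For reference, the Rec. 601 forward transform on normalized [0, 1] RGB components is
commonly written as follows, with Cb and Cr falling in [-0.5, 0.5] (approximate
coefficients; the class's exact constants may differ slightly):
Y  =  0.299 * R + 0.587 * G + 0.114 * B
Cb = -0.169 * R - 0.331 * G + 0.500 * B
Cr =  0.500 * R - 0.419 * G - 0.081 * B
The inverse transform then recovers RGB as R = Y + 1.402 * Cr,
G = Y - 0.344 * Cb - 0.714 * Cr and B = Y + 1.772 * Cb.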
Performs an explicit conversion from to .
The YCbCr color.
The result of the conversion.
Performs an explicit conversion from to .
The YCbCr color.
The result of the conversion.
Image's blob.
The class represents a blob - a part of another image. The
class encapsulates the blob itself and information about its position
in the parent image.
The class is not responsible for disposing of the blob's image, so this should be
done manually when required.
Blob's image.
The property keeps the blob's image. If it equals null,
the image may be extracted using the
or method.
Blob's image size.
The property specifies the size of the blob's image.
If the property is set to , the blob's image size equals the
size of the original image. If the property is set to , the blob's
image size equals the size of the actual blob.
Blob's rectangle in the original image.
The property specifies position of the blob in the original image
and its size.
Blob's ID in the original image.
Blob's area.
The property equals the blob's area, measured as the number of pixels
contained by the blob.
Blob's fullness, [0, 1].
The property equals the blob's fullness, which is calculated
as Area / ( Width * Height ). If it equals 1, then
it means that the entire blob's rectangle is filled by the blob's pixels (no
blank areas), which is true only for rectangles. If it equals 0.5,
for example, then it means that only half of the bounding rectangle is filled
by the blob's pixels.
Blob's center of gravity point.
The property keeps the center of gravity point, which is calculated as the
mean value of the X and Y coordinates of the blob's points.
Blob's mean color.
The property keeps the mean color of the pixels comprising the blob.
Blob color's standard deviation.
The property keeps the standard deviation of the colors of the pixels comprising the blob.
Initializes a new instance of the class.
Blob's ID in the original image.
Blob's rectangle in the original image.
This constructor leaves the property uninitialized. The blob's
image may be extracted later using the
or method.
Initializes a new instance of the class.
Source blob to copy.
This copy constructor leaves the property uninitialized. The blob's
image may be extracted later using the
or method.
Blob counter - counts objects in an image which are separated by a black background.
The class counts and extracts standalone objects in
images using the connected components labeling algorithm.
The algorithm treats all pixels with values less than or equal to
as background, but pixels with higher values are treated as objects' pixels.
For blobs' searching the class supports 8 bpp indexed grayscale images and
24/32 bpp color images that are at least two pixels wide. Images that are one
pixel wide can be processed if they are rotated first, or they can be processed
with .
See documentation about for information about which
pixel formats are supported for extraction of blobs.
Sample usage:
// create an instance of blob counter algorithm
BlobCounter bc = new BlobCounter();
// process binary image
bc.ProcessImage(image);
// process blobs
foreach (Rectangle rect in bc.GetObjectsRectangles())
{
// ...
}
Background threshold's value.
The property sets the threshold value for distinguishing between background
pixels and objects' pixels. All pixels with values less than or equal to this property are
treated as background, but pixels with higher values are treated as objects' pixels.
In the case of color images a pixel is treated as an object's pixel if any of its
RGB values is higher than the corresponding value of this threshold.
For processing grayscale images, set the property with all RGB components equal.
Default value is set to (0, 0, 0) - black color.
Initializes a new instance of the class.
Creates a new instance of the class with
an empty objects map. Before using methods which provide information about blobs
or extract them, the ,
or
method should be called to collect the objects map.
Initializes a new instance of the class.
Image to look for objects in.
Initializes a new instance of the class.
Image data to look for objects in.
Initializes a new instance of the class.
Unmanaged image to look for objects in.
Actual objects map building.
Unmanaged image to process.
The method supports 8 bpp indexed grayscale images and 24/32 bpp color images.
Unsupported pixel format of the source image.
Cannot process images that are one pixel wide. Rotate the image
or use .
Possible object orders.
The enumeration defines the possible sorting orders of objects found by blob
counting classes.
Unsorted order (as it is collected by algorithm).
Objects are sorted by size in descending order (bigger objects go first).
Size is calculated as Width * Height.
Objects are sorted by area in descending order (bigger objects go first).
Objects are sorted by Y coordinate, then by X coordinate in ascending order
(smaller coordinates go first).
Objects are sorted by X coordinate, then by Y coordinate in ascending order
(smaller coordinates go first).
Base class for different blob counting algorithms.
The class is abstract and serves as a base for different blob counting algorithms.
Classes which inherit from this base class are required to implement the
method, which does the actual building of the objects' label map.
For blobs' searching, all inherited classes usually accept binary images, which are actually
thresholded grayscale images. The exact supported format should be checked in the particular class
inheriting from the base class. For blobs' extraction the class supports grayscale (8 bpp indexed)
and color images (24 and 32 bpp).
Sample usage:
// create an instance of a blob counter algorithm
BlobCounterBase bc = new BlobCounter();
// set filtering options
bc.FilterBlobs = true;
bc.MinWidth = 5;
bc.MinHeight = 5;
// process binary image
bc.ProcessImage(image);
// process blobs
foreach (Blob blob in bc.GetObjects(image, false))
{
// ...
// blob.Rectangle - blob's rectangle
// blob.Image - blob's image
}
Gets the width of the image.
Gets the height of the image.
Objects count.
Number of objects (blobs) found by method.
Objects' labels.
The array of width * height size, which holds
labels for all objects. Background is represented with the value 0,
while objects are represented with labels starting from 1.
Objects sort order.
The property specifies the sort order of objects provided
by , , etc.
Specifies if blobs should be filtered.
If the property is equal to false, then there is no additional
post-processing after the image is processed. If the property is set to true, then
blob filtering is done right after the image processing routine. If
is set, then custom blob filtering, implemented by the user, is done. Otherwise
blobs are filtered according to the dimensions specified in the ,
, and properties.
Default value is set to .
Specifies if size filtering should be coupled or not.
In uncoupled filtering mode, objects are filtered out if
their width is smaller than or their height is smaller than
. In coupled filtering mode, objects are filtered out only if
their width is smaller than and their height is
smaller than . In both modes, filtering by objects'
maximum size works in the same way as filtering by objects' minimum size.
Default value is set to , which means uncoupled filtering by size.
Minimum allowed width of blob.
The property specifies the minimum object width acceptable by the blob counting
routine and takes effect only when the property is set to
and the custom blobs' filter is
set to .
See documentation to for additional information.
Minimum allowed height of blob.
The property specifies the minimum object height acceptable by the blob counting
routine and takes effect only when the property is set to
and the custom blobs' filter is
set to .
See documentation to for additional information.
Maximum allowed width of blob.
The property specifies the maximum object width acceptable by the blob counting
routine and takes effect only when the property is set to
and the custom blobs' filter is
set to .
See documentation to for additional information.
Maximum allowed height of blob.
The property specifies the maximum object height acceptable by the blob counting
routine and takes effect only when the property is set to
and the custom blobs' filter is
set to .
See documentation to for additional information.
Custom blobs' filter to use.
The property specifies a custom blobs' filtering routine to use. It takes
effect only if the property is set to .
When a custom blobs' filtering routine is set, it has priority over the default filtering done
with , , and .
Initializes a new instance of the class.
Creates a new instance of the class with
an empty objects map. Before using methods which provide information about blobs
or extract them, the ,
or
method should be called to collect the objects map.
Initializes a new instance of the class.
Binary image to look for objects in.
Creates a new instance of the class with
an initialized objects map built by calling the method.
Initializes a new instance of the class.
Binary image data to look for objects in.
Creates a new instance of the class with
an initialized objects map built by calling the method.
Initializes a new instance of the class.
Unmanaged binary image to look for objects in.
Creates a new instance of the class with
an initialized objects map built by calling the method.
Build objects map.
Source binary image.
Processes the image and builds the objects map, which is used later to extract blobs.
Unsupported pixel format of the source image.
Build objects map.
Source binary image data.
Processes the image and builds the objects map, which is used later to extract blobs.
Unsupported pixel format of the source image.
Build object map from raw image data.
Source unmanaged binary image data.
Processes the image and builds the objects map, which is used later to extract blobs.
Unsupported pixel format of the source image.
Thrown by some inherited classes if some image property other
than the pixel format is not supported. See that class's documentation or the exception message for details.
Get objects' rectangles.
Returns array of objects' rectangles.
The method returns an array of objects' rectangles. Before calling the
method, the ,
or method should be called, which will
build the objects map.
No image was processed before, so objects' rectangles
can not be collected.
Get objects' information.
Returns an array of partially initialized blobs (without the property initialized).
In the amount of provided information, the method is between the and
methods. The method provides an array of blobs without their images initialized.
A blob's image may be extracted later using the
or method.
// create blob counter and process image
BlobCounter bc = new BlobCounter(sourceImage);
// specify sort order
bc.ObjectsOrder = ObjectsOrder.Size;
// get objects' information (blobs without image)
Blob[] blobs = bc.GetObjectsInformation();
// process blobs
foreach (Blob blob in blobs)
{
// check blob's properties
if (blob.Rectangle.Width > 50)
{
// the blob looks interesting, let's extract it
bc.ExtractBlobsImage(sourceImage, blob);
}
}
No image was processed before, so objects' information
can not be collected.
Get blobs.
Source image to extract objects from.
Returns array of blobs.
Specifies size of blobs' image to extract.
If set to each blobs' image will have the same size as
the specified image. If set to each blobs' image will
have the size of its blob.
The method returns array of blobs. Before calling the
method, the ,
or method should be called, which will build
objects map.
The method supports 24/32 bpp color and 8 bpp indexed grayscale images.
Unsupported pixel format of the provided image.
No image was processed before, so objects
can not be collected.
Get blobs.
Source unmanaged image to extract objects from.
Specifies size of blobs' image to extract.
If set to each blobs' image will have the same size as
the specified image. If set to each blobs' image will
have the size of its blob.
Returns array of blobs.
The method returns array of blobs. Before calling the
method, the ,
or method should be called, which will build
objects map.
The method supports 24/32 bpp color and 8 bpp indexed grayscale images.
Unsupported pixel format of the provided image.
No image was processed before, so objects
can not be collected.
Extract blob's image.
Source image to extract blob's image from.
Blob which is required to be extracted.
Specifies size of blobs' image to extract.
If set to each blobs' image will have the same size as
the specified image. If set to each blobs' image will
have the size of its blob.
The method is used to extract the image of a partially initialized blob, which
was provided by the method. Before calling the
method, the ,
or method should be called, which will build
objects map.
The method supports 24/32 bpp color and 8 bpp indexed grayscale images.
Unsupported pixel format of the provided image.
No image was processed before, so blob
can not be extracted.
Extract blob's image.
Source unmanaged image to extract blob's image from.
Blob which is required to be extracted.
Specifies size of blobs' image to extract.
If set to each blobs' image will have the same size as
the specified image. If set to each blobs' image will
have the size of its blob.
The method is used to extract the image of a partially initialized blob, which
was provided by the method. Before calling the
method, the ,
or method should be called, which will build
objects map.
The method supports 24/32 bpp color and 8 bpp indexed grayscale images.
Unsupported pixel format of the provided image.
No image was processed before, so blob
can not be extracted.
Get list of points on the left and right edges of the blob.
Blob to collect edge points for.
List of points on the left edge of the blob.
List of points on the right edge of the blob.
The method scans each line of the blob and finds the left-most and the
right-most points for it, adding them to the appropriate lists. The method may be very
useful in conjunction with different routines from ,
which allow finding convex hull or quadrilateral's corners.
Both lists of points are sorted by Y coordinate - points with smaller Y
value go first.
No image was processed before, so blob
can not be extracted.
Get list of points on the top and bottom edges of the blob.
Blob to collect edge points for.
List of points on the top edge of the blob.
List of points on the bottom edge of the blob.
The method scans each column of the blob and finds the top-most and the
bottom-most points for it, adding them to the appropriate lists. The method may be very
useful in conjunction with different routines from ,
which allow finding convex hull or quadrilateral's corners.
Both lists of points are sorted by X coordinate - points with smaller X
value go first.
No image was processed before, so blob
can not be extracted.
Get list of object's edge points.
Blob to collect edge points for.
Returns unsorted list of blob's edge points.
The method scans each row and column of the blob and finds the
top-most/bottom-most/left-most/right-most points. The method returns a similar result as if the results of
both and
methods were combined, but each edge point occurs only once in the list.
Edge points in the returned list are not ordered. This makes the list unusable
for visualization with methods which draw a polygon or poly-line. But the returned list
can be used with algorithms like convex hull search, shape analysis, etc.
No image was processed before, so blob
can not be extracted.
Actual objects map building.
Unmanaged image to process.
By the time this method is called, the bitmap's pixel format has not
yet been checked, so this should be done by the class inheriting from the base class.
The and members are initialized
before the method is called, so these members may be used safely.
Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.
Releases unmanaged and - optionally - managed resources.
true to release both managed and unmanaged resources; false to release only unmanaged resources.
Block match class keeps information about found block match. The class is
used with block matching algorithms implementing
interface.
Reference point in source image.
Match point in search image (point of a found match).
Similarity between blocks in source and search images, [0..1].
Initializes a new instance of the class.
Reference point in source image.
Match point in search image (point of a found match).
Similarity between blocks in source and search images, [0..1].
Color dithering using Burkes error diffusion.
The image processing routine represents a color dithering algorithm, which is based on
error diffusion dithering with Burkes coefficients. Error is diffused
over 7 neighbor pixels with the following coefficients:
| * | 8 | 4 |
| 2 | 4 | 8 | 4 | 2 |
/ 32
The image processing routine accepts 24/32 bpp color images for processing. As a result this routine
produces a 4 bpp or 8 bpp indexed image, depending on the size of the specified
color table - a 4 bpp result for
color tables with 16 colors or less; an 8 bpp result for larger color tables.
Sample usage:
// create color image quantization routine
ColorImageQuantizer ciq = new ColorImageQuantizer( new MedianCutQuantizer( ) );
// create 8 colors table
Color[] colorTable = ciq.CalculatePalette( image, 8 );
// create dithering routine
BurkesColorDithering dithering = new BurkesColorDithering( );
dithering.ColorTable = colorTable;
// apply the dithering routine
Bitmap newImage = dithering.Apply( image );
Initial image:
Result image:
Initializes a new instance of the class.
Base class for error diffusion color dithering, where error is diffused to
adjacent neighbor pixels.
The class does error diffusion to adjacent neighbor pixels
using a specified set of coefficients. These coefficients are represented by
a 2-dimensional jagged array, where the first array of coefficients is for
the pixels to the right on the current row, and the rest of the arrays are for the pixels on the rows below.
All arrays except the first one should have an odd number of coefficients.
Suppose that the error diffusion coefficients are represented by the following
jagged array:
int[][] coefficients = new int[2][] {
new int[1] { 7 },
new int[3] { 3, 5, 1 }
};
The above coefficients are used to diffuse the error over the neighbor
pixels shown below (* marks the current pixel; coefficients are placed at the corresponding
neighbor pixels):
| * | 7 |
| 3 | 5 | 1 |
/ 16
The image processing routine accepts 24/32 bpp color images for processing.
Sample usage:
// create dithering routine
ColorErrorDiffusionToAdjacentNeighbors dithering = new ColorErrorDiffusionToAdjacentNeighbors(
new int[3][] {
new int[2] { 5, 3 },
new int[5] { 2, 4, 5, 4, 2 },
new int[3] { 2, 3, 2 }
} );
// apply the dithering routine
Bitmap newImage = dithering.Apply( image );
Diffusion coefficients.
Set of coefficients, which are used for error diffusion to
pixel's neighbors.
Initializes a new instance of the class.
Diffusion coefficients (see
for more information).
Do error diffusion.
Error value of red component.
Error value of green component.
Error value of blue component.
Pointer to current processing pixel.
All parameters of the image and current processing pixel's coordinates
are initialized by base class.
Color quantization tools.
The class contains methods aimed at simplifying work with color quantization
algorithms implementing the interface. Using its methods it is possible
to calculate a reduced color palette for the specified image or to reduce its colors to a specified number.
Sample usage:
// instantiate the images' color quantization class
ColorImageQuantizer ciq = new ColorImageQuantizer( new MedianCutQuantizer( ) );
// get 16 color palette for a given image
Color[] colorTable = ciq.CalculatePalette( image, 16 );
// ... or just reduce colors in the specified image
Bitmap newImage = ciq.ReduceColors( image, 16 );
Initial image:
Result image:
Color quantization algorithm used by this class to build color palettes for the specified images.
Use color caching during color reduction or not.
The property has effect only for methods like and
specifies whether an internal cache of already processed colors should be used or not. For each pixel in the original
image the color reduction routine searches the target color palette to find the best matching color.
To avoid repeating the search for already processed colors, the class may use an internal dictionary
which maps colors of the original image to indexes in the target color palette.
The property provides a trade-off. On one hand it may speed up the color reduction routine, but on the other
hand it increases memory usage. Also, cache usage may not be efficient for very small target color tables.
Default value is set to .
Initializes a new instance of the class.
Color quantization algorithm to use for processing images.
Calculate reduced color palette for the specified image.
Image to calculate palette for.
Palette size to calculate.
Return reduced color palette for the specified image.
See for details.
Calculate reduced color palette for the specified image.
Image to calculate palette for.
Palette size to calculate.
Return reduced color palette for the specified image.
The method processes the specified image and feeds the color value of each pixel
to the specified color quantization algorithm. Finally it returns the color palette built by
that algorithm.
Unsupported format of the source image - it must be a 24 or 32 bpp color image.
Create an image with reduced number of colors.
Source image to process.
Number of colors to get in the output image, [2, 256].
Returns image with reduced number of colors.
See for details.
Create an image with reduced number of colors.
Source image to process.
Number of colors to get in the output image, [2, 256].
Returns image with reduced number of colors.
The method creates an image which looks similar to the specified image, but contains
a reduced number of colors. First, the target color palette is calculated using the
method and then a new image is created, where pixels from the given source image are substituted by
the best matching colors from the calculated color table.
The output image has a 4 bpp or 8 bpp indexed pixel format depending on the target palette size -
4 bpp for a palette size of 16 or less; 8 bpp otherwise.
Unsupported format of the source image - it must be a 24 or 32 bpp color image.
Invalid size of the target color palette.
Create an image with reduced number of colors using the specified palette.
Source image to process.
Target color palette. Must contain 2-256 colors.
Returns image with reduced number of colors.
See for details.
Create an image with reduced number of colors using the specified palette.
Source image to process.
Target color palette. Must contain 2-256 colors.
Returns image with reduced number of colors.
The method creates an image which looks similar to the specified image, but contains
a reduced number of colors. It substitutes every pixel of the source image with the closest matching color
in the specified palette.
The output image has a 4 bpp or 8 bpp indexed pixel format depending on the target palette size -
4 bpp for a palette size of 16 or less; 8 bpp otherwise.
Unsupported format of the source image - it must be a 24 or 32 bpp color image.
Invalid size of the target color palette.
Base class for error diffusion color dithering.
The class is the base class for color dithering algorithms based on
error diffusion.
Color dithering with error diffusion is based on the idea that each pixel from the specified source
image is substituted with the best matching color (or rather with the color's index) from the specified color
table. However, the error (the difference between the color value in the source image and the best matching color)
is diffused to neighbor pixels of the source image, which affects the way those pixels are substituted by colors
from the specified table.
The image processing routine accepts 24/32 bpp color images for processing. As a result this routine
produces a 4 bpp or 8 bpp indexed image, depending on the size of the specified color table - a 4 bpp result for
color tables with 16 colors or less; an 8 bpp result for larger color tables.
Current processing X coordinate.
Current processing Y coordinate.
Processing image's width.
Processing image's height.
Processing image's stride (line size).
Processing image's pixel size in bytes.
Color table to use for image dithering. Must contain 2-256 colors.
Color table size determines the format of the resulting image produced by this
image processing routine. If the color table contains 16 colors or less, then the result image will have
a 4 bpp indexed pixel format. If the color table contains more than 16 colors, then the result image will
have an 8 bpp indexed pixel format.
By default the property is initialized with default 16 colors, which are:
Black, Dark Blue, Dark Green, Dark Cyan, Dark Red, Dark Magenta, Dark Khaki, Light Gray,
Gray, Blue, Green, Cyan, Red, Magenta, Yellow and White.
Color table length must be in the [2, 256] range.
Use color caching during color dithering or not.
The property specifies whether an internal cache of already processed colors should be used or not.
For each pixel in the original image the color dithering routine searches the target color palette to find
the best matching color. To avoid repeating the search for already processed colors, the class may
use an internal dictionary which maps colors of the original image to indexes in the target color palette.
The property provides a trade-off. On one hand it may speed up the color dithering routine, but on the other
hand it increases memory usage. Also, cache usage may not be efficient for very small target color tables.
Default value is set to .
Initializes a new instance of the class.
Do error diffusion.
Error value of red component.
Error value of green component.
Error value of blue component.
Pointer to current processing pixel.
All parameters of the image and current processing pixel's coordinates
are initialized in protected members.
Perform color dithering for the specified image.
Source image to do color dithering for.
Returns color dithered image. See for information about format of
the result image.
Unsupported pixel format of the source image. It must be a 24 or 32 bpp color image.
Perform color dithering for the specified image.
Source image to do color dithering for.
Returns color dithered image. See for information about format of
the result image.
Unsupported pixel format of the source image. It must be a 24 or 32 bpp color image.
Color dithering using Floyd-Steinberg error diffusion.
The image processing routine represents a color dithering algorithm, which is based on
error diffusion dithering with Floyd-Steinberg
coefficients. Error is diffused over 4 neighbor pixels with the following coefficients:
| * | 7 |
| 3 | 5 | 1 |
/ 16
The image processing routine accepts 24/32 bpp color images for processing. As a result this routine
produces a 4 bpp or 8 bpp indexed image, depending on the size of the specified
color table - a 4 bpp result for
color tables with 16 colors or less; an 8 bpp result for larger color tables.
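To illustrate how the coefficients above are applied, here is the inner step of
Floyd-Steinberg on a simple grayscale buffer; this is a sketch of the diffusion
pattern only, with boundary checks omitted and FindNearest standing in as a
hypothetical palette lookup, not the class's actual color implementation:
int old = pixels[y, x];
int nearest = FindNearest(old);   // hypothetical: closest palette value
pixels[y, x] = nearest;
int error = old - nearest;
// spread the quantization error onto the not-yet-visited neighbors
pixels[y, x + 1]     += error * 7 / 16;
pixels[y + 1, x - 1] += error * 3 / 16;
pixels[y + 1, x]     += error * 5 / 16;
pixels[y + 1, x + 1] += error * 1 / 16;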
Sample usage:
// create color image quantization routine
ColorImageQuantizer ciq = new ColorImageQuantizer( new MedianCutQuantizer( ) );
// create 16 colors table
Color[] colorTable = ciq.CalculatePalette( image, 16 );
// create dithering routine
FloydSteinbergColorDithering dithering = new FloydSteinbergColorDithering( );
dithering.ColorTable = colorTable;
// apply the dithering routine
Bitmap newImage = dithering.Apply( image );
Initial image:
Result image:
Initializes a new instance of the class.
Interface which is implemented by different color quantization algorithms.
The interface defines a set of methods which are to be implemented by different
color quantization algorithms - algorithms which are aimed at providing a reduced color table/palette
for a color image.
See documentation to particular implementation of the interface for additional information
about the algorithm.
Process color by a color quantization algorithm.
Color to process.
Depending on particular implementation of interface,
this method may simply process the specified color or store it in internal list for
later color palette calculation.
Get palette of the specified size.
Palette size to return.
Returns reduced color palette for the accumulated/processed colors.
The method must be called after continuously calling method and
returns reduced color palette for colors accumulated/processed so far.
Clear internals of the algorithm, like accumulated color table, etc.
The method resets the internal state of a color quantization algorithm, returning
it to its initial state.
Color dithering using Jarvis, Judice and Ninke error diffusion.
The image processing routine represents a color dithering algorithm, which is based on
error diffusion dithering with Jarvis-Judice-Ninke coefficients. Error is diffused
over 12 neighbor pixels with the following coefficients:
| * | 7 | 5 |
| 3 | 5 | 7 | 5 | 3 |
| 1 | 3 | 5 | 3 | 1 |
/ 48
The image processing routine accepts 24/32 bpp color images for processing. As a result this routine
produces a 4 bpp or 8 bpp indexed image, depending on the size of the specified
color table - a 4 bpp result for
color tables with 16 colors or less; an 8 bpp result for larger color tables.
Sample usage:
// create color image quantization routine
ColorImageQuantizer ciq = new ColorImageQuantizer( new MedianCutQuantizer( ) );
// create 32 colors table
Color[] colorTable = ciq.CalculatePalette( image, 32 );
// create dithering routine
JarvisJudiceNinkeColorDithering dithering = new JarvisJudiceNinkeColorDithering( );
dithering.ColorTable = colorTable;
// apply the dithering routine
Bitmap newImage = dithering.Apply( image );
Initial image:
Result image:
Initializes a new instance of the class.
Median cut color quantization algorithm.
The class implements median cut
color quantization algorithm.
See also class, which may simplify processing of images.
Sample usage:
// create the color quantization algorithm
IColorQuantizer quantizer = new MedianCutQuantizer( );
// process colors (taken from image for example)
for ( int i = 0; i < pixelsToProcess; i++ )
{
quantizer.AddColor( /* pixel color */ );
}
// get palette reduced to 16 colors
Color[] palette = quantizer.GetPalette( 16 );
Add color to the list of processed colors.
Color to add to the internal list.
The method adds the specified color into internal list of processed colors. The list
is used later by method to build reduced color table of the specified size.
Get palette of the specified size.
Palette size to get.
Returns reduced palette of the specified size, which covers colors processed so far.
The method must be called after repeatedly calling the method and
returns the reduced color palette for the colors accumulated/processed so far.
Clear internal state of the color quantization algorithm by clearing the list of colors
processed so far.
Color dithering with a threshold matrix (ordered dithering).
The class implements ordered color dithering as described on
Wikipedia.
The algorithm achieves dithering by applying a threshold map to
the pixels displayed, causing some of the pixels to be rendered at a different color, depending on
how far the original color lies between the available color entries.
The image processing routine accepts 24/32 bpp color images for processing. As a result this routine
produces a 4 bpp or 8 bpp indexed image, depending on the size of the specified
color table - 4 bpp result for
color tables with 16 colors or fewer; 8 bpp result for larger color tables.
Sample usage:
// create color image quantization routine
ColorImageQuantizer ciq = new ColorImageQuantizer( new MedianCutQuantizer( ) );
// create 256 colors table
Color[] colorTable = ciq.CalculatePalette( image, 256 );
// create dithering routine
OrderedColorDithering dithering = new OrderedColorDithering( );
dithering.ColorTable = colorTable;
// apply the dithering routine
Bitmap newImage = dithering.Apply( image );
Initial image:
Result image:
Threshold matrix - values to add to the source image's values.
The property keeps a threshold matrix, which is applied to the values of a source image
to dither. By adding these values to the source image the algorithm produces the effect where pixels
of the same color in the source image may have a different color in the result image (depending on the pixel's
position). This threshold map is also known as an index matrix or Bayer matrix.
By default the property is initialized with the matrix below:
2 18 6 22
26 10 30 14
8 24 4 20
32 16 28 12
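For illustration, the sketch below supplies a custom threshold matrix through the constructor
mentioned above (a minimal sketch; the byte[,] parameter type is an assumption of this example):
// assumed: the constructor accepts the threshold matrix as byte[,]
byte[,] matrix = new byte[4, 4]
{
    {  2, 18,  6, 22 },
    { 26, 10, 30, 14 },
    {  8, 24,  4, 20 },
    { 32, 16, 28, 12 }
};
// create ordered dithering routine with the custom matrix
OrderedColorDithering dithering = new OrderedColorDithering( matrix );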
Color table to use for image dithering. Must contain 2-256 colors.
Color table size determines the format of the resulting image produced by this
image processing routine. If the color table contains 16 colors or fewer, then the result image will have
4 bpp indexed pixel format. If the color table contains more than 16 colors, then the result image will
have 8 bpp indexed pixel format.
By default the property is initialized with default 16 colors, which are:
Black, Dark Blue, Dark Green, Dark Cyan, Dark Red, Dark Magenta, Dark Khaki, Light Gray,
Gray, Blue, Green, Cyan, Red, Magenta, Yellow and White.
Color table length must be in the [2, 256] range.
Use color caching during color dithering or not.
The property specifies if an internal cache of already processed colors should be used or not.
For each pixel in the original image the color dithering routine searches the target color palette to find
the best matching color. To avoid repeating the search for already processed colors, the class may
use an internal dictionary which maps colors of the original image to indices in the target color palette.
The property provides a trade-off. On one hand it may speed up the color dithering routine, but on the other
hand it increases memory usage. Also, cache usage may not be efficient for very small target color tables.
Default value is set to .
Initializes a new instance of the class.
Initializes a new instance of the class.
Threshold matrix (see property).
Perform color dithering for the specified image.
Source image to do color dithering for.
Returns color dithered image. See for information about format of
the result image.
Unsupported pixel format of the source image. It must be a 24 or 32 bpp color image.
Perform color dithering for the specified image.
Source image to do color dithering for.
Returns color dithered image. See for information about format of
the result image.
Unsupported pixel format of the source image. It must be a 24 or 32 bpp color image.
Color dithering using Sierra error diffusion.
The image processing routine implements a color dithering algorithm based on
error diffusion dithering with Sierra coefficients. The error is diffused
onto 10 neighbor pixels with the following coefficients:
| * | 5 | 3 |
| 2 | 4 | 5 | 4 | 2 |
| 2 | 3 | 2 |
/ 32
The image processing routine accepts 24/32 bpp color images for processing. As a result this routine
produces a 4 bpp or 8 bpp indexed image, depending on the size of the specified
color table - 4 bpp result for
color tables with 16 colors or fewer; 8 bpp result for larger color tables.
Sample usage:
// create dithering routine (use default color table)
SierraColorDithering dithering = new SierraColorDithering( );
// apply the dithering routine
Bitmap newImage = dithering.Apply( image );
Initial image:
Result image:
Initializes a new instance of the class.
Color dithering using Stucki error diffusion.
The image processing routine implements a color dithering algorithm based on
error diffusion dithering with Stucki coefficients. The error is diffused
onto 12 neighbor pixels with the following coefficients:
| * | 8 | 4 |
| 2 | 4 | 8 | 4 | 2 |
| 1 | 2 | 4 | 2 | 1 |
/ 42
The image processing routine accepts 24/32 bpp color images for processing. As a result this routine
produces a 4 bpp or 8 bpp indexed image, depending on the size of the specified
color table - 4 bpp result for
color tables with 16 colors or fewer; 8 bpp result for larger color tables.
Sample usage:
// create color image quantization routine
ColorImageQuantizer ciq = new ColorImageQuantizer( new MedianCutQuantizer( ) );
// create 64 colors table
Color[] colorTable = ciq.CalculatePalette( image, 64 );
// create dithering routine
StuckiColorDithering dithering = new StuckiColorDithering( );
dithering.ColorTable = colorTable;
// apply the dithering routine
Bitmap newImage = dithering.Apply( image );
Initial image:
Result image:
Initializes a new instance of the class.
Filtering of frequencies outside of specified range in complex Fourier
transformed image.
The filter keeps only the specified range of frequencies in a complex
Fourier transformed image. The rest of the frequencies are zeroed.
Sample usage:
// create complex image
ComplexImage complexImage = ComplexImage.FromBitmap( image );
// do forward Fourier transformation
complexImage.ForwardFourierTransform( );
// create filter
FrequencyFilter filter = new FrequencyFilter( new IntRange( 20, 128 ) );
// apply filter
filter.Apply( complexImage );
// do backward Fourier transformation
complexImage.BackwardFourierTransform( );
// get complex image as bitmap
Bitmap fourierImage = complexImage.ToBitmap( );
Initial image:
Fourier image:
Range of frequencies to keep.
The property specifies the range of frequencies to keep. Frequencies
outside of this range are zeroed.
Default value is set to [0, 1024].
Initializes a new instance of the class.
Initializes a new instance of the class.
Range of frequencies to keep.
Apply filter to complex image.
Complex image to apply filter to.
The source complex image should be Fourier transformed.
Image processing filter which operates on a Fourier transformed
complex image.
The interface defines the set of methods which should be
provided by all image processing filters that operate on Fourier
transformed complex images.
Apply filter to complex image.
Complex image to apply filter to.
Complex image.
The class is used to keep an image represented with complex numbers, suitable for Fourier
transformations.
Sample usage:
// create complex image
ComplexImage complexImage = ComplexImage.FromBitmap( image );
// do forward Fourier transformation
complexImage.ForwardFourierTransform( );
// get complex image as bitmap
Bitmap fourierImage = complexImage.ToBitmap( );
Initial image:
Fourier image:
Image width.
Image height.
Status of the image - Fourier transformed or not.
Complex image's data.
Returns a 2D array of [height, width] size, which keeps the image's
complex data.
Initializes a new instance of the class.
Image width.
Image height.
The constructor is protected, which makes it impossible to instantiate this
class directly. To create an instance of this class, the or
method should be used.
Clone the complex image.
Returns copy of the complex image.
Create complex image from grayscale bitmap.
Source grayscale bitmap (8 bpp indexed).
Returns an instance of complex image.
The source image has incorrect pixel format.
Image width and height should be power of 2.
Create complex image from grayscale bitmap.
Source image data (8 bpp indexed).
Returns an instance of complex image.
The source image has incorrect pixel format.
Image width and height should be power of 2.
Convert complex image to bitmap.
Returns grayscale bitmap.
Applies forward fast Fourier transformation to the complex image.
Applies backward fast Fourier transformation to the complex image.
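Since FromBitmap( ) requires the image width and height to be powers of 2, a caller may wish
to validate the dimensions first. The helper below is a hypothetical sketch, not part of the class:
// hypothetical helper: check the power-of-2 size requirement
// before calling ComplexImage.FromBitmap( )
static bool IsPowerOf2Size( Bitmap image )
{
    return ( image.Width & ( image.Width - 1 ) ) == 0 &&
           ( image.Height & ( image.Height - 1 ) ) == 0;
}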
Skew angle checker for scanned documents.
The class implements a document skew checking algorithm, which is based
on the Hough line transformation. The algorithm
searches for text base lines - a black line of text bottoms followed
by a white line below.
The routine assumes that a white-background document with
black letters is provided. The algorithm is not intended for arbitrary objects, but for
document images with text.
The range of angles to detect is controlled by property.
The filter accepts 8 bpp grayscale images for processing.
Sample usage:
// create instance of skew checker
DocumentSkewChecker skewChecker = new DocumentSkewChecker( );
// get documents skew angle
double angle = skewChecker.GetSkewAngle( documentImage );
// create rotation filter
RotateBilinear rotationFilter = new RotateBilinear( -angle );
rotationFilter.FillColor = Color.White;
// rotate image applying the filter
Bitmap rotatedImage = rotationFilter.Apply( documentImage );
Initial image:
Deskewed image:
Steps per degree, [1, 10].
The value defines the quality of the Hough transform and its ability to detect
lines' slope precisely.
Default value is set to 1.
Maximum skew angle to detect, [0, 45] degrees.
The value sets the maximum document skew angle to detect.
The document's skew angle can be either positive (rotated counter-clockwise) or negative
(rotated clockwise). So setting this value to 25, for example, will lead to a
[-25, 25] degrees detection range.
Scanned documents usually have skew in the [-20, 20] degrees range.
Default value is set to 30.
Minimum angle to detect skew in degrees.
The property is deprecated and setting it has no effect.
Use property instead.
Maximum angle to detect skew in degrees.
The property is deprecated and setting it has no effect.
Use property instead.
Radius for searching local peak value, [1, 10].
The value determines the radius around a map value which is analyzed to determine
if that value is a local maximum in the specified area.
Default value is set to 4.
Initializes a new instance of the class.
Get skew angle of the provided document image.
Document's image to get skew angle of.
Returns document's skew angle. If the returned angle equals -90,
then document skew detection has failed.
Unsupported pixel format of the source image.
Get skew angle of the provided document image.
Document's image to get skew angle of.
Image's rectangle to process (used to exclude processing of
regions, which are not relevant to skew detection).
Returns document's skew angle. If the returned angle equals -90,
then document skew detection has failed.
Unsupported pixel format of the source image.
Get skew angle of the provided document image.
Document's image data to get skew angle of.
Returns document's skew angle. If the returned angle equals -90,
then document skew detection has failed.
Unsupported pixel format of the source image.
Get skew angle of the provided document image.
Document's image data to get skew angle of.
Image's rectangle to process (used to exclude processing of
regions, which are not relevant to skew detection).
Returns document's skew angle. If the returned angle equals -90,
then document skew detection has failed.
Unsupported pixel format of the source image.
Get skew angle of the provided document image.
Document's unmanaged image to get skew angle of.
Returns document's skew angle. If the returned angle equals -90,
then document skew detection has failed.
Unsupported pixel format of the source image.
Get skew angle of the provided document image.
Document's unmanaged image to get skew angle of.
Image's rectangle to process (used to exclude processing of
regions, which are not relevant to skew detection).
Returns document's skew angle. If the returned angle equals -90,
then document skew detection has failed.
Unsupported pixel format of the source image.
Drawing primitives.
The class allows drawing some primitives directly on
locked image data or an unmanaged image.
All methods of this class support drawing only on color 24/32 bpp images and
on grayscale 8 bpp indexed images.
When it comes to alpha blending for 24/32 bpp images, all calculations are done
as described on Wikipedia
(see "over" operator).
Fill rectangle on the specified image.
Source image data to draw on.
Rectangle's coordinates to fill.
Rectangle's color.
The source image has incorrect pixel format.
Fill rectangle on the specified image.
Source image to draw on.
Rectangle's coordinates to fill.
Rectangle's color.
The source image has incorrect pixel format.
Draw rectangle on the specified image.
Source image data to draw on.
Rectangle's coordinates to draw.
Rectangle's color.
The source image has incorrect pixel format.
Draw rectangle on the specified image.
Source image to draw on.
Rectangle's coordinates to draw.
Rectangle's color.
The source image has incorrect pixel format.
Draw a line on the specified image.
Source image data to draw on.
The first point to connect.
The second point to connect.
Line's color.
The source image has incorrect pixel format.
Draw a line on the specified image.
Source image to draw on.
The first point to connect.
The second point to connect.
Line's color.
The source image has incorrect pixel format.
Draw a polygon on the specified image.
Source image data to draw on.
Points of the polygon to draw.
Polygon's color.
The method draws a polygon by connecting all points from the
first one to the last one and then connecting the last point with the first one.
Draw a polygon on the specified image.
Source image to draw on.
Points of the polygon to draw.
Polygon's color.
The method draws a polygon by connecting all points from the
first one to the last one and then connecting the last point with the first one.
Draw a polyline on the specified image.
Source image data to draw on.
Points of the polyline to draw.
Polyline's color.
The method draws a polyline by connecting all points from the
first one to the last one. Unlike
method, this method does not connect the last point with the first one.
Draw a polyline on the specified image.
Source image to draw on.
Points of the polyline to draw.
Polyline's color.
The method draws a polyline by connecting all points from the
first one to the last one. Unlike
method, this method does not connect the last point with the first one.
Unsupported image format exception.
The unsupported image format exception is thrown when the
user passes an image of a certain format to an image processing routine which does
not support that format. Check the documentation of the image processing routine
to discover which formats are supported by the routine.
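For example, a caller may guard against unsupported formats as in the sketch below
(filter stands for an arbitrary image processing routine and is an assumption of this example):
try
{
    // attempt to process an image of arbitrary pixel format
    Bitmap result = filter.Apply( sourceImage );
}
catch ( UnsupportedImageFormatException )
{
    // convert the image to a supported format and retry, or report the problem
}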
Initializes a new instance of the class.
Initializes a new instance of the class.
Message providing some additional information.
Initializes a new instance of the class.
Message providing some additional information.
Name of the invalid parameter.
Invalid image properties exception.
The invalid image properties exception is thrown when the
user provides an image whose properties are treated as invalid by a
particular image processing routine. The exception is also
thrown when the user tries to access some properties of an image (or
of an image recently processed by some routine), which are not valid for that image.
Initializes a new instance of the class.
Initializes a new instance of the class.
Message providing some additional information.
Initializes a new instance of the class.
Message providing some additional information.
Name of the invalid parameter.
Block matching implementation with the exhaustive search algorithm.
The class implements an exhaustive search block matching algorithm
(see documentation for for information about
block matching algorithms). The exhaustive search algorithm tests each possible
location of the block within the search window, trying to find a match with minimal
difference.
Because of the exhaustive nature of the algorithm, high performance
should not be expected if a big number of reference points is provided
or a big block size and search radius are specified. Minimizing these values increases
performance, but too small a block size and search radius may affect quality.
The class processes only grayscale (8 bpp indexed) and color (24 bpp) images.
Sample usage:
// collect reference points using corners detector (for example)
SusanCornersDetector scd = new SusanCornersDetector( 30, 18 );
List<IntPoint> points = scd.ProcessImage( sourceImage );
// create block matching algorithm's instance
ExhaustiveBlockMatching bm = new ExhaustiveBlockMatching( 8, 12 );
// process images searching for block matchings
List<BlockMatch> matches = bm.ProcessImage( sourceImage, points, searchImage );
// draw displacement vectors
BitmapData data = sourceImage.LockBits(
new Rectangle( 0, 0, sourceImage.Width, sourceImage.Height ),
ImageLockMode.ReadWrite, sourceImage.PixelFormat );
foreach ( BlockMatch match in matches )
{
// highlight the original point in source image
Drawing.FillRectangle( data,
new Rectangle( match.SourcePoint.X - 1, match.SourcePoint.Y - 1, 3, 3 ),
Color.Yellow );
// draw line to the point in search image
Drawing.Line( data, match.SourcePoint, match.MatchPoint, Color.Red );
// check similarity
if ( match.Similarity > 0.98f )
{
// process block with high similarity somehow special
}
}
sourceImage.UnlockBits( data );
Test image 1 (source):
Test image 2 (search):
Result image:
Search radius.
The value specifies the shift from reference point in all
four directions, used to search for the best matching block.
Default value is set to 12.
Block size to search for.
The value specifies the block size to search for. For each provided
reference point, a square block of this size is taken from the source image
(the reference point becomes the coordinate of the block's center) and the best match
is searched for in the second image within the specified search
radius.
Default value is set to 16.
Similarity threshold, [0..1].
The property sets the minimal acceptable similarity between blocks
in source and search images. If similarity is lower than this value,
then the candidate block in search image is not treated as a match for the block
in source image.
Default value is set to 0.9.
Initializes a new instance of the class.
Initializes a new instance of the class.
Block size to search for.
Search radius.
Process images matching blocks between them.
Source image with reference points.
List of reference points to be matched.
Image in which the reference points will be looked for.
Returns list of found block matches. The list is sorted by similarity
of found matches in descending order.
Source and search images sizes must match.
Source images can be grayscale (8 bpp indexed) or color (24 bpp) images only.
Source and search images must have same pixel format.
Process images matching blocks between them.
Source image with reference points.
List of reference points to be matched.
Image in which the reference points will be looked for.
Returns list of found block matches. The list is sorted by similarity
of found matches in descending order.
Source and search images sizes must match.
Source images can be grayscale (8 bpp indexed) or color (24 bpp) images only.
Source and search images must have same pixel format.
Process images matching blocks between them.
Source unmanaged image with reference points.
List of reference points to be matched.
Unmanaged image in which the reference points will be looked for.
Returns list of found block matches. The list is sorted by similarity
of found matches in descending order.
Source and search images sizes must match.
Source images can be grayscale (8 bpp indexed) or color (24 bpp) images only.
Source and search images must have same pixel format.
Exhaustive template matching.
The class implements an exhaustive template matching algorithm,
which performs a complete scan of the source image, comparing each pixel with the corresponding
pixel of the template.
The class processes only grayscale 8 bpp and color 24 bpp images.
Sample usage:
// create template matching algorithm's instance
var tm = new ExhaustiveTemplateMatching(0.9f);
// find all matchings with specified above similarity
TemplateMatch[] matchings = tm.ProcessImage(sourceImage, templateImage);
// highlight found matchings
BitmapData data = sourceImage.LockBits(ImageLockMode.ReadWrite);
foreach (TemplateMatch m in matchings)
{
Drawing.Rectangle(data, m.Rectangle, Color.White);
// do something else with the matching
}
sourceImage.UnlockBits(data);
The class can also be used to get the similarity level between two images of the same
size, which can be useful to get information about how different/similar the images are:
// create template matching algorithm's instance
// use zero similarity threshold to make sure the algorithm always returns a match
ExhaustiveTemplateMatching tm = new ExhaustiveTemplateMatching(0);
// compare two images
TemplateMatch[] matchings = tm.ProcessImage(image1, image2);
// check similarity level
if (matchings[0].Similarity > 0.95f)
{
// do something with quite similar images
}
Similarity threshold, [0..1].
The property sets the minimal acceptable similarity between the template
and a potential candidate. If the similarity is lower than this value,
then the candidate is not treated as matching the template.
Default value is set to 0.9.
Initializes a new instance of the class.
Initializes a new instance of the class.
Similarity threshold.
Process image looking for matchings with specified template.
Source image to process.
Template image to search for.
Returns array of found template matches. The array is sorted by similarity
of found matches in descending order.
The source image has incorrect pixel format.
Template image is bigger than source image.
Process image looking for matchings with specified template.
Source image to process.
Template image to search for.
Rectangle in source image to search template for.
Returns array of found template matches. The array is sorted by similarity
of found matches in descending order.
The source image has incorrect pixel format.
Template image is bigger than source image.
Process image looking for matchings with specified template.
Source image data to process.
Template image to search for.
Returns array of found template matches. The array is sorted by similarity
of found matches in descending order.
The source image has incorrect pixel format.
Template image is bigger than source image.
Process image looking for matchings with specified template.
Source image data to process.
Template image to search for.
Rectangle in source image to search template for.
Returns array of found template matches. The array is sorted by similarity
of found matches in descending order.
The source image has incorrect pixel format.
Template image is bigger than source image.
Process image looking for matchings with specified template.
Unmanaged source image to process.
Unmanaged template image to search for.
Returns array of found template matches. The array is sorted by similarity
of found matches in descending order.
The source image has incorrect pixel format.
Template image is bigger than source image.
Process image looking for matchings with specified template.
Unmanaged source image to process.
Unmanaged template image to search for.
Rectangle in source image to search template for.
Returns array of found template matches. The array is sorted by similarity
of found matches in descending order.
The source image has incorrect pixel format.
Template image is bigger than search zone.
Interface for custom blobs' filters used for filtering blobs after
blob counting.
The interface should be implemented by classes which perform
custom blob filtering, different from the default filtering implemented in
. See
for additional information.
Check the specified blob and decide if it should be kept or not.
Blob to check.
Return if the blob should be kept or
if it should be removed.
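A minimal sketch of a custom implementation is shown below (it assumes the interface's
single Check( Blob ) method and the Blob.Area property):
// keep only blobs whose area is at least 100 pixels
public class MinAreaBlobsFilter : IBlobsFilter
{
    public bool Check( Blob blob )
    {
        return blob.Area >= 100;
    }
}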
Horizontal intensity statistics.
The class provides information about horizontal distribution
of pixel intensities, which may be used to locate objects, their centers, etc.
The class accepts grayscale (8 bpp indexed and 16 bpp) and color (24, 32, 48 and 64 bpp) images.
In the case of 32 and 64 bpp color images, the alpha channel is not processed - statistics are not
gathered for this channel.
Sample usage:
// collect statistics
HorizontalIntensityStatistics his = new HorizontalIntensityStatistics( sourceImage );
// get gray histogram (for grayscale image)
Histogram histogram = his.Gray;
// output some histogram's information
System.Diagnostics.Debug.WriteLine( "Mean = " + histogram.Mean );
System.Diagnostics.Debug.WriteLine( "Min = " + histogram.Min );
System.Diagnostics.Debug.WriteLine( "Max = " + histogram.Max );
Sample grayscale image with its horizontal intensity histogram:
Histogram for red channel.
Histogram for green channel.
Histogram for blue channel.
Histogram for gray channel (intensities).
Value which specifies if the processed image was color or grayscale.
If the property equals true, then the
property should be used to retrieve the histogram for the processed grayscale image.
Otherwise the , and properties
should be used to retrieve histograms for the particular RGB channels of the processed
color image.
Initializes a new instance of the class.
Source image.
Unsupported pixel format of the source image.
Initializes a new instance of the class.
Source image data.
Unsupported pixel format of the source image.
Initializes a new instance of the class.
Source unmanaged image.
Unsupported pixel format of the source image.
Gather horizontal intensity statistics for specified image.
Source image.
Hough circle.
Represents circle of Hough transform.
Circle center's X coordinate.
Circle center's Y coordinate.
Circle's radius.
Circle's absolute intensity.
Circle's relative intensity.
Initializes a new instance of the class.
Circle's X coordinate.
Circle's Y coordinate.
Circle's radius.
Circle's absolute intensity.
Circle's relative intensity.
Compare the object with another instance of this class.
Object to compare with.
A signed number indicating the relative values of this instance and value: 1) greater than zero -
this instance is greater than value; 2) zero - this instance is equal to value;
3) less than zero - this instance is less than value.
The sort order is descending.
Objects are compared using their intensity values.
Hough circle transformation.
The class implements the Hough circle transformation, which allows detecting
circles of a specified radius in an image.
The class accepts binary images for processing, which are represented by 8 bpp grayscale images.
All black pixels (pixel value 0) are treated as background, while pixels with a different value are
treated as circles' pixels.
Sample usage:
HoughCircleTransformation circleTransform = new HoughCircleTransformation( 35 );
// apply Hough circle transform
circleTransform.ProcessImage( sourceImage );
Bitmap houghCircleImage = circleTransform.ToBitmap( );
// get circles using relative intensity
HoughCircle[] circles = circleTransform.GetCirclesByRelativeIntensity( 0.5 );
foreach ( HoughCircle circle in circles )
{
// ...
}
Initial image:
Hough circle transformation image:
Minimum circle's intensity in Hough map to recognize a circle.
The value sets the minimum intensity level for a circle. If a value in the Hough
map has lower intensity, then it is not treated as a circle.
Default value is set to 10.
Radius for searching local peak value.
The value determines the radius around a map value which is analyzed to determine
if that value is a local maximum in the specified area.
Default value is set to 4. Minimum value is 1. Maximum value is 10.
Maximum found intensity in Hough map.
The property provides maximum found circle's intensity.
Found circles count.
The property provides the total number of found circles whose intensity is higher
than (or equal to) the requested minimum intensity.
Initializes a new instance of the class.
Circles' radius to detect.
Process an image building Hough map.
Source image to process.
Unsupported pixel format of the source image.
Process an image building Hough map.
Source image data to process.
Unsupported pixel format of the source image.
Process an image building Hough map.
Source unmanaged image to process.
Unsupported pixel format of the source image.
Convert Hough map to bitmap.
Returns 8 bpp grayscale bitmap, which shows the Hough map.
Hough transformation was not yet done by calling
ProcessImage() method.
Get the specified amount of circles with the highest intensity.
Amount of circles to get.
Returns array of the most intense circles. If there are no circles detected,
the returned array has zero length.
Get circles with relative intensity higher than the specified value.
Minimum relative intensity of circles.
Returns array of the most intense circles. If there are no circles detected,
the returned array has zero length.
Hough line.
Represents a line from the Hough Line transformation using
polar coordinates. See
Wikipedia for information on how to convert polar coordinates to Cartesian coordinates.
The Hough Line transformation does not provide
information about a line's start and end points, only its slope and distance from the image's center. Using
only the provided information it is not possible to draw the detected line exactly as it appears on
the source image. But it is possible to draw a line through the entire image, which contains the
source line (see sample code below).
HoughLineTransformation lineTransform = new HoughLineTransformation();
// apply Hough line transform
lineTransform.ProcessImage(sourceImage);
Bitmap houghLineImage = lineTransform.ToBitmap();
// get lines using relative intensity
HoughLine[] lines = lineTransform.GetLinesByRelativeIntensity(0.5);
foreach (HoughLine line in lines)
{
// get line's radius and theta values
int r = line.Radius;
double t = line.Theta;
// check if line is in lower part of the image
if (r < 0)
{
t += 180;
r = -r;
}
// convert degrees to radians
t = (t / 180) * Math.PI;
// get image centers (all coordinates are measured relative to center)
int w2 = image.Width / 2;
int h2 = image.Height / 2;
double x0 = 0, x1 = 0, y0 = 0, y1 = 0;
if (line.Theta != 0)
{
// non-vertical line
x0 = -w2; // most left point
x1 = w2; // most right point
// calculate corresponding y values
y0 = (-Math.Cos(t) * x0 + r) / Math.Sin(t);
y1 = (-Math.Cos(t) * x1 + r) / Math.Sin(t);
}
else
{
// vertical line
x0 = line.Radius;
x1 = line.Radius;
y0 = h2;
y1 = -h2;
}
// draw line on the image
Drawing.Line(sourceData,
new IntPoint((int)x0 + w2, h2 - (int)y0),
new IntPoint((int)x1 + w2, h2 - (int)y1),
Color.Red);
}
To clarify the meaning of the and values
of detected Hough lines, let's take a look at the sample image below and the
corresponding values of radius and theta for the lines on the image:
Detected radius and theta values (lines are drawn in corresponding colors):
- Theta = 90, R = 125, I = 249;
- Theta = 0, R = -170, I = 187 (converts to Theta = 180, R = 170);
- Theta = 90, R = -58, I = 163 (converts to Theta = 270, R = 58);
- Theta = 101, R = -101, I = 130 (converts to Theta = 281, R = 101);
- Theta = 0, R = 43, I = 112;
- Theta = 45, R = 127, I = 82.
Line's slope - angle between polar axis and line's radius (normal going
from pole to the line). Measured in degrees, [0, 180).
Line's distance from image center, (−∞, +∞).
A negative line radius means that the line resides in the lower
part of the polar coordinate system. This means that the value
should be increased by 180 degrees and the radius should be made positive.
Line's absolute intensity, (0, +∞).
Line's absolute intensity is a measure which equals
the number of pixels detected on the line. This value is bigger for longer
lines.
The value may not be 100% reliable as a measure of the exact number of pixels
on the line. Although the two values correlate strongly (which means they are very close
in most cases), the intensity value may vary slightly.
Line's relative intensity, (0, 1].
Line's relative intensity is the ratio of the line's
value to the maximum found intensity. For the longest line (the line with the highest intensity) the
relative intensity is set to 1. If a line's relative intensity is 0.5, for example, this means
its intensity is half of the maximum found intensity.
Initializes a new instance of the class.
Line's slope.
Line's distance from image center.
Line's absolute intensity.
Line's relative intensity.
Compare the object with another instance of this class.
Object to compare with.
A signed number indicating the relative values of this instance and value: 1) greater than zero -
this instance is greater than value; 2) zero - this instance is equal to value;
3) less than zero - this instance is less than value.
The sort order is descending.
Objects are compared using their intensity values.
Returns a that represents this instance.
Draws the line to a given image.
The image where this Hough line should be drawn to.
The color to be used when drawing the line.
Hough line transformation.
The class implements the Hough line transformation, which allows detecting
straight lines in an image. Lines found by the class are provided in a
polar coordinate system -
lines' distances from the image's center and lines' slopes are provided.
The pole of the polar coordinate system is put at the processed image's center and the polar
axis is directed to the right from the pole. Lines' slope is measured in degrees and
is actually represented by the angle between the polar axis and the line's radius (the normal going
from the pole to the line), which is measured in the counter-clockwise direction.
Found lines may have a negative radius.
This means that the line resides in the lower part of the polar coordinate system
and its value should be increased by 180 degrees and the
radius should be made positive.
The class accepts binary images for processing, which are represented by 8 bpp grayscale images.
All black pixels (pixel value 0) are treated as background, while pixels with a different value are
treated as lines' pixels.
See also documentation to class for additional information
about Hough Lines.
The following example shows how to apply the Hough Line Transform. The example
will apply it to the "sudoku.png" test image from OpenCV, as shown below:
Input image after applying the filter sequence:
Output image after the Hough transform:
Hough lines drawn over the input image:
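The example's code follows the pattern below (a sketch; the pre-processing filter sequence is
omitted and binaryImage is assumed to be the resulting 8 bpp binary image):
// create the transformation and build the Hough map
HoughLineTransformation lineTransform = new HoughLineTransformation( );
lineTransform.ProcessImage( binaryImage );
// visualize the Hough map
Bitmap houghLineImage = lineTransform.ToBitmap( );
// get the strongest lines using relative intensity
HoughLine[] lines = lineTransform.GetLinesByRelativeIntensity( 0.5 );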
Steps per degree.
The value defines the quality of the Hough line transformation and its ability to detect
lines' slope precisely.
Default value is set to 1. Minimum value is 1. Maximum value is 10.
Minimum line's intensity in Hough map to recognize a line.
The value sets the minimum intensity level for a line. If a value in the Hough
map has lower intensity, then it is not treated as a line.
Default value is set to 10.
Radius for searching local peak value.
The value determines the radius around a map value which is analyzed to determine
if that value is a local maximum in the specified area.
Default value is set to 4. Minimum value is 1. Maximum value is 10.
Maximum found intensity in Hough map.
The property provides maximum found line's intensity.
Found lines count.
The property provides the total number of found lines whose intensity is higher
than (or equal to) the requested minimum intensity.
Initializes a new instance of the class.
Process an image building Hough map.
Source image to process.
Unsupported pixel format of the source image.
Process an image building Hough map.
Source image to process.
Image's rectangle to process.
Unsupported pixel format of the source image.
Process an image building Hough map.
Source image data to process.
Unsupported pixel format of the source image.
Process an image building Hough map.
Source image data to process.
Image's rectangle to process.
Unsupported pixel format of the source image.
Process an image building Hough map.
Source unmanaged image to process.
Unsupported pixel format of the source image.
Process an image building Hough map.
Source unmanaged image to process.
Image's rectangle to process.
Unsupported pixel format of the source image.
Convert Hough map to bitmap.
Returns 8 bpp grayscale bitmap, which shows the Hough map.
Hough transformation was not yet done by calling
ProcessImage() method.
Get the specified amount of lines with the highest intensity.
Amount of lines to get.
Returns array of the most intense lines. If there are no lines detected,
the returned array has zero length.
Get lines with relative intensity higher than the specified value.
Minimum relative intensity of lines.
Returns array of lines. If there are no lines detected,
the returned array has zero length.
Block matching interface.
The interface specifies the set of methods which should be implemented by different
block matching algorithms.
Block matching algorithms work with two images - source and search image - and
a set of reference points. For each provided reference point, the algorithm takes
a block from source image (reference point is a coordinate of block's center) and finds
the best match for it in search image providing its coordinate (search is done within
search window of specified size). In other words, block matching algorithm tries to
find new coordinates in search image of specified reference points in source image.
Process images matching blocks between them.
Source image with reference points.
List of reference points to be matched.
Image in which the reference points will be looked for.
Returns list of found block matches.
Process images matching blocks between them.
Source image with reference points.
List of reference points to be matched.
Image in which the reference points will be looked for.
Returns list of found block matches.
Process images matching blocks between them.
Source unmanaged image with reference points.
List of reference points to be matched.
Unmanaged image in which the reference points will be looked for.
Returns list of found block matches.
Corners detector's interface.
The interface specifies the set of methods which should be implemented by different
corner detection algorithms.
Gets the list of image pixel formats that are supported by
this extractor. The extractor will check whether the pixel
format of any provided images are in this list to determine
whether the image can be processed or not.
Process image looking for corners.
Source image to process.
Returns list of found corners (X-Y coordinates).
Process image looking for corners.
Source image data to process.
Returns list of found corners (X-Y coordinates).
Process image looking for corners.
Unmanaged source image to process.
Returns list of found corners (X-Y coordinates).
Core image related methods.
All methods of this class are static and represent general routines
used by different image processing classes.
Check if specified 8 bpp image is grayscale.
Image to check.
Returns true if the image is grayscale or false otherwise.
The method checks if the image is a grayscale image of 256 gradients.
The method first examines if the image's pixel format is
Format8bppIndexed
and then it examines its palette to check if the image is grayscale or not.
Check if the specified 8 bpp image contains color-indexed pixels instead of intensity values.
Image to check.
Returns true if the image is color-indexed or false otherwise.
Create and initialize new 8 bpp grayscale image.
Image width.
Image height.
Returns the created grayscale image.
The method creates new 8 bpp grayscale image and initializes its palette.
Grayscale image is represented as
Format8bppIndexed
image with palette initialized to 256 gradients of gray color.
Set palette of the 8 bpp indexed image to grayscale.
Image to initialize.
The method initializes palette of
Format8bppIndexed
image with 256 gradients of gray color.
Provided image is not 8 bpp indexed image.
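A short sketch combining these helpers:
// create a new 8 bpp grayscale image with a proper palette
Bitmap grayImage = AForge.Imaging.Image.CreateGrayscaleImage( 320, 240 );
// verify that an 8 bpp image actually has a grayscale palette
bool isGray = AForge.Imaging.Image.IsGrayscale( grayImage );
// (re)initialize the palette of an 8 bpp indexed image to grayscale
AForge.Imaging.Image.SetGrayscalePalette( grayImage );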
Clone image.
Source image.
Pixel format of result image.
Returns clone of the source image with specified pixel format.
The original Bitmap.Clone()
does not produce the desired result - it does not create a clone with the specified pixel format.
Moreover, the original method does not create an actual clone - it does not create a copy
of the image. That is why this method was implemented to provide the functionality.
Clone image.
Source image as an array of bytes.
Returns clone of the source image with specified pixel format.
The original Bitmap.Clone()
does not produce the desired result - it does not create a clone with the specified pixel format.
Moreover, the original method does not create an actual clone - it does not create a copy
of the image. That is why this method was implemented to provide the functionality.
Clone image.
Source image.
Return clone of the source image.
The original Bitmap.Clone()
does not produce the desired result - it does not create an actual clone (it does not create a copy
of the image). That is why this method was implemented to provide the functionality.
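For example (a sketch using the static Image class from this library):
// clone the source image, converting it to 24 bpp RGB
Bitmap clone = AForge.Imaging.Image.Clone( sourceImage, PixelFormat.Format24bppRgb );
// create a true copy preserving the original pixel format
Bitmap copy = AForge.Imaging.Image.Clone( sourceImage );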
Converts an 8-bpp color image into an 8-bpp grayscale image, setting its color
palette to grayscale and replacing palette indices with their grayscale values.
The bitmap to be converted.
Clone image.
Source image data.
Clones an image from source image data. The method does not clone the palette in
case the source image has an indexed pixel format.
Format an image.
Source image to format.
Formats the image to one of the formats supported
by the AForge.Imaging library. The image is left untouched
if it is already of
Format24bppRgb or
Format32bppRgb or
Format32bppArgb or
Format48bppRgb or
Format64bppArgb
format or is grayscale; otherwise the image
is converted to Format24bppRgb
format.
The method is deprecated and the method should
be used instead, specifying the desired pixel format.
Load bitmap from file.
File name to load bitmap from.
Returns loaded bitmap.
The method is provided as an alternative to the
method to solve the issue of a locked file. The standard .NET method locks the source file until the
image object is disposed, so the file cannot be deleted or overwritten. This method works around the issue and
does not lock the source file.
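Usage sketch (the file name is illustrative):
// load a bitmap without keeping the source file locked
Bitmap image = AForge.Imaging.Image.FromFile( "test.png" );
// the file can now be deleted or overwritten while the bitmap is in use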
Convert bitmap with 16 bits per plane to a bitmap with 8 bits per plane.
Source image to convert.
Returns new image which is a copy of the source image but with 8 bits per plane.
The routine does the following pixel format conversions:
- Format16bppGrayScale to
Format8bppIndexed with grayscale palette;
- Format48bppRgb to
Format24bppRgb;
- Format64bppArgb to
Format32bppArgb;
- Format64bppPArgb to
Format32bppPArgb.
Invalid pixel format of the source image.
Load bitmap from URL.
URL to load bitmap from.
Returns loaded bitmap.
Load bitmap from URL.
URL to load bitmap from.
The local directory where the file should be stored.
Returns loaded bitmap.
Convert bitmap with 8 bits per plane to a bitmap with 16 bits per plane.
Source image to convert.
Returns new image which is a copy of the source image but with 16 bits per plane.
The routine does the following pixel format conversions:
- Format8bppIndexed (grayscale palette assumed) to
Format16bppGrayScale;
- Format24bppRgb to
Format48bppRgb;
- Format32bppArgb to
Format64bppArgb;
- Format32bppPArgb to
Format64bppPArgb.
Invalid pixel format of the source image.
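A sketch of a round trip between 8 and 16 bits per plane (method names as used in this
library; image8 is assumed to be an 8 bpp grayscale bitmap):
// widen an 8 bpp grayscale image to 16 bpp grayscale
Bitmap image16 = AForge.Imaging.Image.Convert8bppTo16bpp( image8 );
// ... process with 16 bpp aware routines ...
// narrow the result back to 8 bpp for display or saving
Bitmap image8again = AForge.Imaging.Image.Convert16bppTo8bpp( image16 );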
Gets the color depth used in an image, in number of bytes per pixel.
The image.
Gets the color depth used in an image, in number of bits per pixel.
The image.
Gets the color depth used in an image, in number of bytes per pixel.
The image.
Gets the color depth used in an image, in number of bits per pixel.
The image.
Gather statistics about image in RGB color space.
The class is used to accumulate statistical values about images,
like histogram, mean, standard deviation, etc. for each color channel in RGB color
space.
The class accepts 8 bpp grayscale and 24/32 bpp color images for processing.
Sample usage:
// gather statistics
ImageStatistics stat = new ImageStatistics( image );
// get red channel's histogram
Histogram red = stat.Red;
// check mean value of red channel
if ( red.Mean > 128 )
{
// do further processing
}
Histogram of red channel.
The property is valid only for color images
(see property).
Histogram of green channel.
The property is valid only for color images
(see property).
Histogram of blue channel.
The property is valid only for color images
(see property).
Histogram of gray channel.
The property is valid only for grayscale images
(see property).
Histogram of red channel excluding black pixels.
The property keeps statistics about the red channel, excluding
all black pixels, which affects the mean, standard deviation, etc.
The property is valid only for color images
(see property).
Histogram of green channel excluding black pixels.
The property keeps statistics about the green channel, excluding
all black pixels, which affects the mean, standard deviation, etc.
The property is valid only for color images
(see property).
Histogram of blue channel excluding black pixels.
The property keeps statistics about the blue channel, excluding
all black pixels, which affects the mean, standard deviation, etc.
The property is valid only for color images
(see property).
Histogram of gray channel excluding black pixels.
The property keeps statistics about the gray channel, excluding
all black pixels, which affects the mean, standard deviation, etc.
The property is valid only for grayscale images
(see property).
Total pixels count in the processed image.
Total pixels count in the processed image excluding black pixels.
Value which specifies if the processed image was color or grayscale.
If the value is set to then
property should be used to get statistics information about image. Otherwise
, and properties should be used
for color images.
Initializes a new instance of the class.
Image to gather statistics about.
Source pixel format is not supported.
Initializes a new instance of the class.
Image to gather statistics about.
Mask image which specifies areas to collect statistics for.
The mask image must be a grayscale/binary (8bpp) image of the same size as the
specified source image, where black pixels (value 0) correspond to areas which should be excluded
from processing. So statistics are calculated only for pixels which are not black in the mask image.
Source pixel format is not supported.
Mask image must be 8 bpp grayscale image.
Mask must have the same size as the source image to get statistics for.
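A sketch of gathering statistics only inside a masked region, using the (image, mask image)
constructor overload described above:
// non-black mask pixels select the region of interest
ImageStatistics stat = new ImageStatistics( image, maskImage );
// the statistics now reflect only the selected pixels
int pixels = stat.PixelsCount;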
Initializes a new instance of the class.
Image to gather statistics about.
Mask array which specifies areas to collect statistics for.
The mask array must be of the same size as the specified source image, where 0 values
correspond to areas which should be excluded from processing. So statistics are calculated only for pixels
which have a non-zero corresponding value in the mask.
Source pixel format is not supported.
Mask must have the same size as the source image to get statistics for.
Initializes a new instance of the class.
Unmanaged image to gather statistics about.
Source pixel format is not supported.
Initializes a new instance of the class.
Image to gather statistics about.
Mask image which specifies areas to collect statistics for.
The mask image must be a grayscale/binary (8bpp) image of the same size as the
specified source image, where black pixels (value 0) correspond to areas which should be excluded
from processing. So statistics are calculated only for pixels which are not black in the mask image.
Source pixel format is not supported.
Mask image must be 8 bpp grayscale image.
Mask must have the same size as the source image to get statistics for.
Initializes a new instance of the class.
Image to gather statistics about.
Mask array which specifies areas to collect statistics for.
The mask array must be of the same size as the specified source image, where 0 values
correspond to areas which should be excluded from processing. So statistics are calculated only for pixels
which have a non-zero corresponding value in the mask.
Source pixel format is not supported.
Mask must have the same size as the source image to get statistics for.
Gather statistics about image in HSL color space.
The class is used to accumulate statistical values about images,
like histogram, mean, standard deviation, etc. for each HSL color channel.
The class accepts 24 and 32 bpp color images for processing.
Sample usage:
// gather statistics
ImageStatisticsHSL stat = new ImageStatisticsHSL( image );
// get saturation channel's histogram
ContinuousHistogram saturation = stat.Saturation;
// check mean value of saturation channel
if ( saturation.Mean > 0.5 )
{
// do further processing
}
Histogram of saturation channel.
Histogram of luminance channel.
Histogram of saturation channel excluding black pixels.
The property keeps statistics about the saturation channel, excluding
all black pixels, which affects the mean, standard deviation, etc.
Histogram of luminance channel excluding black pixels.
The property keeps statistics about the luminance channel, excluding
all black pixels, which affects the mean, standard deviation, etc.
Total pixels count in the processed image.
Total pixels count in the processed image excluding black pixels.
Initializes a new instance of the class.
Image to gather statistics about.
Source pixel format is not supported.
Initializes a new instance of the class.
Image to gather statistics about.
Mask image which specifies areas to collect statistics for.
The mask image must be a grayscale/binary (8bpp) image of the same size as the
specified source image, where black pixels (value 0) correspond to areas which should be excluded
from processing. So statistics are calculated only for pixels which are not black in the mask image.
Source pixel format is not supported.
Mask image must be 8 bpp grayscale image.
Mask must have the same size as the source image to get statistics for.
Initializes a new instance of the class.
Image to gather statistics about.
Mask array which specifies areas to collect statistics for.
The mask array must be of the same size as the specified source image, where 0 values
correspond to areas which should be excluded from processing. So statistics are calculated only for pixels
which have a non-zero corresponding value in the mask.
Source pixel format is not supported.
Mask must have the same size as the source image to get statistics for.
Initializes a new instance of the class.
Unmanaged image to gather statistics about.
Source pixel format is not supported.
Initializes a new instance of the class.
Image to gather statistics about.
Mask image which specifies areas to collect statistics for.
The mask image must be a grayscale/binary (8bpp) image of the same size as the
specified source image, where black pixels (value 0) correspond to areas which should be excluded
from processing. So statistics are calculated only for pixels which are not black in the mask image.
Source pixel format is not supported.
Mask image must be 8 bpp grayscale image.
Mask must have the same size as the source image to get statistics for.
Initializes a new instance of the class.
Image to gather statistics about.
Mask array which specifies areas to collect statistics for.
The mask array must be of the same size as the specified source image, where 0 values
correspond to areas which should be excluded from processing. So statistics is calculated only for pixels,
which have none zero corresponding value in the mask.
Source pixel format is not supported.
Mask must have the same size as the source image to get statistics for.
Gather statistics about image in YCbCr color space.
The class is used to accumulate statistical values about images,
like histogram, mean, standard deviation, etc. for each YCbCr color channel.
The class accepts 24 and 32 bpp color images for processing.
Sample usage:
// gather statistics
ImageStatisticsYCbCr stat = new ImageStatisticsYCbCr( image );
// get Y channel's histogram
ContinuousHistogram y = stat.Y;
// check mean value of Y channel
if ( y.Mean > 0.5 )
{
// do further processing
}
Histogram of Y channel.
Histogram of Cb channel.
Histogram of Cr channel.
Histogram of Y channel excluding black pixels.
The property keeps statistics about the Y channel, excluding
all black pixels, which affects the mean, standard deviation, etc.
Histogram of Cb channel excluding black pixels.
The property keeps statistics about the Cb channel, excluding
all black pixels, which affects the mean, standard deviation, etc.
Histogram of Cr channel excluding black pixels.
The property keeps statistics about the Cr channel, excluding
all black pixels, which affects the mean, standard deviation, etc.
Total pixels count in the processed image.
Total pixels count in the processed image excluding black pixels.
Initializes a new instance of the class.
Image to gather statistics about.
Source pixel format is not supported.
Initializes a new instance of the class.
Image to gather statistics about.
Mask image which specifies areas to collect statistics for.
The mask image must be a grayscale/binary (8bpp) image of the same size as the
specified source image, where black pixels (value 0) correspond to areas which should be excluded
from processing. So statistics are calculated only for pixels which are not black in the mask image.
Source pixel format is not supported.
Mask image must be 8 bpp grayscale image.
Mask must have the same size as the source image to get statistics for.
Initializes a new instance of the class.
Image to gather statistics about.
Mask array which specifies areas to collect statistics for.
The mask array must be of the same size as the specified source image, where 0 values
correspond to areas which should be excluded from processing. So statistics are calculated only for pixels
which have a non-zero corresponding value in the mask.
Source pixel format is not supported.
Mask must have the same size as the source image to get statistics for.
Initializes a new instance of the class.
Unmanaged image to gather statistics about.
Source pixel format is not supported.
Initializes a new instance of the class.
Image to gather statistics about.
Mask image which specifies areas to collect statistics for.
The mask image must be a grayscale/binary (8bpp) image of the same size as the
specified source image, where black pixels (value 0) correspond to areas which should be excluded
from processing. So statistics are calculated only for pixels which are not black in the mask image.
Source pixel format is not supported.
Mask image must be 8 bpp grayscale image.
Mask must have the same size as the source image to get statistics for.
Initializes a new instance of the class.
Image to gather statistics about.
Mask array which specifies areas to collect statistics for.
The mask array must be of the same size as the specified source image, where 0 values
correspond to areas which should be excluded from processing. So statistics are calculated only for pixels
which have a non-zero corresponding value in the mask.
Source pixel format is not supported.
Mask must have the same size as the source image to get statistics for.
Integral image.
This class implements integral image concept, which is described by
Viola and Jones in: P. Viola and M. J. Jones, "Robust real-time face detection",
Int. Journal of Computer Vision 57(2), pp. 137–154, 2004.
"An integral image I of an input image G is defined as the image in which the
intensity at a pixel position is equal to the sum of the intensities of all the pixels
above and to the left of that position in the original image."
The intensity at position (x, y) can be written as:
I(x, y) = Σ(i = 0..x) Σ(j = 0..y) G(i, j)
The class uses 32-bit integers to represent integral image.
The class processes only grayscale (8 bpp indexed) images.
This class contains two versions of each method: safe and unsafe. Safe methods
check the provided coordinates and ensure that they belong to the image, which makes
these methods slower. Unsafe methods do not check the coordinates and rely on them
belonging to the image, which makes these methods faster.
This class implements the simplest upright representation of an integral image. For an integral
image that can represent squared integral images as well as tilted images at the same time, please
refer to .
Sample usage:
// create integral image
IntegralImage im = IntegralImage.FromBitmap(image);
// get pixels' mean value in the specified rectangle
float mean = im.GetRectangleMean(10, 10, 20, 30);
Integral image's array.
See remarks to property.
Integral image's array.
See remarks to property.
Width of the source image the integral image was constructed for.
Height of the source image the integral image was constructed for.
Provides access to internal array keeping integral image data.
The array should be accessed by [y][x] indexing.
The array's size is [+1, +1]. The first
row and column are filled with zeros, which makes calculation of
rectangles' sums more efficient.
Provides access to internal array keeping integral image data.
The array should be accessed by [y, x] indexing.
The array's size is [+1, +1]. The first
row and column are filled with zeros, which makes calculation of
rectangles' sums more efficient.
Initializes a new instance of the class.
Image width.
Image height.
The constructor is protected, which makes it impossible to instantiate this
class directly. To create an instance of this class, the or
method should be used.
Construct integral image from source grayscale image.
Source grayscale image.
Returns integral image.
The source image has incorrect pixel format.
Construct integral image from source grayscale image.
Source image data.
Returns integral image.
The source image has incorrect pixel format.
Construct integral image from source grayscale image.
Source unmanaged image.
Returns integral image.
The source image has incorrect pixel format.
Calculate sum of pixels in the specified rectangle.
X coordinate of left-top rectangle's corner.
Y coordinate of left-top rectangle's corner.
X coordinate of right-bottom rectangle's corner.
Y coordinate of right-bottom rectangle's corner.
Returns sum of pixels in the specified rectangle.
Both specified points are included into the calculation rectangle.
Calculate horizontal (X) haar wavelet at the specified point.
X coordinate of the point to calculate wavelet at.
Y coordinate of the point to calculate wavelet at.
Wavelet size to calculate.
Returns value of the horizontal wavelet at the specified point.
The method calculates horizontal wavelet, which is a difference
of two horizontally adjacent boxes' sums, i.e. A-B. A is the sum of rectangle with coordinates
(x, y-radius, x+radius-1, y+radius-1). B is the sum of rectangle with coordinates
(x-radius, y-radius, x-1, y+radius-1).
Calculate vertical (Y) haar wavelet at the specified point.
X coordinate of the point to calculate wavelet at.
Y coordinate of the point to calculate wavelet at.
Wavelet size to calculate.
Returns value of the vertical wavelet at the specified point.
The method calculates vertical wavelet, which is a difference
of two vertical adjacent boxes' sums, i.e. A-B. A is the sum of rectangle with coordinates
(x-radius, y, x+radius-1, y+radius-1). B is the sum of rectangle with coordinates
(x-radius, y-radius, x+radius-1, y-1).
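A short sketch of the two wavelet methods above (the names GetHaarXWavelet/GetHaarYWavelet and the int return type are assumptions):
// create integral image from a grayscale bitmap
IntegralImage im = IntegralImage.FromBitmap(grayImage);
// horizontal and vertical wavelet responses at point (50, 50), radius 4
int dx = im.GetHaarXWavelet(50, 50, 4);
int dy = im.GetHaarYWavelet(50, 50, 4);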
Calculate sum of pixels in the specified rectangle without checking its coordinates.
X coordinate of left-top rectangle's corner.
Y coordinate of left-top rectangle's corner.
X coordinate of right-bottom rectangle's corner.
Y coordinate of right-bottom rectangle's corner.
Returns sum of pixels in the specified rectangle.
Both specified points are included into the calculation rectangle.
Calculate sum of pixels in the specified rectangle.
X coordinate of central point of the rectangle.
Y coordinate of central point of the rectangle.
Radius of the rectangle.
Returns sum of pixels in the specified rectangle.
The method calculates the sum of pixels in a square rectangle with
odd width and height. For example, to calculate the sum of a
3x3 rectangle, specify its center point and a radius equal to 1.
Calculate sum of pixels in the specified rectangle without checking its coordinates.
X coordinate of central point of the rectangle.
Y coordinate of central point of the rectangle.
Radius of the rectangle.
Returns sum of pixels in the specified rectangle.
The method calculates the sum of pixels in a square rectangle with
odd width and height. For example, to calculate the sum of a
3x3 rectangle, specify its center point and a radius equal to 1.
Calculate mean value of pixels in the specified rectangle.
X coordinate of left-top rectangle's corner.
Y coordinate of left-top rectangle's corner.
X coordinate of right-bottom rectangle's corner.
Y coordinate of right-bottom rectangle's corner.
Returns mean value of pixels in the specified rectangle.
Both specified points are included into the calculation rectangle.
Calculate mean value of pixels in the specified rectangle without checking its coordinates.
X coordinate of left-top rectangle's corner.
Y coordinate of left-top rectangle's corner.
X coordinate of right-bottom rectangle's corner.
Y coordinate of right-bottom rectangle's corner.
Returns mean value of pixels in the specified rectangle.
Both specified points are included into the calculation rectangle.
Calculate mean value of pixels in the specified rectangle.
X coordinate of central point of the rectangle.
Y coordinate of central point of the rectangle.
Radius of the rectangle.
Returns mean value of pixels in the specified rectangle.
The method calculates the mean value of pixels in a square rectangle with
odd width and height. For example, to calculate the mean value of a
3x3 rectangle, specify its center point and a radius equal to 1.
Calculate mean value of pixels in the specified rectangle without checking its coordinates.
X coordinate of central point of the rectangle.
Y coordinate of central point of the rectangle.
Radius of the rectangle.
Returns mean value of pixels in the specified rectangle.
The method calculates the mean value of pixels in a square rectangle with
odd width and height. For example, to calculate the mean value of a
3x3 rectangle, specify its center point and a radius equal to 1.
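Following the center/radius convention just described, a 3x3 sum and mean can be sketched as (return types assumed):
// sum and mean of a 3x3 square centered at (10, 10) - radius equal to 1
uint sum = im.GetRectangleSum(10, 10, 1);
float mean = im.GetRectangleMean(10, 10, 1);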
Creates a new object that is a copy of the current instance.
A new object that is a copy of this instance.
Interpolation routines.
Bicubic kernel.
X value.
Bicubic coefficient.
The function implements bicubic kernel W(x) as described on
Wikipedia
(coefficient a is set to -0.5).
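For reference, the kernel defined in the cited Wikipedia article is:
W(x) = (a + 2)|x|^3 - (a + 3)|x|^2 + 1, for |x| <= 1
W(x) = a|x|^3 - 5a|x|^2 + 8a|x| - 4a, for 1 < |x| < 2
W(x) = 0, otherwise
With a = -0.5 this becomes W(x) = 1.5|x|^3 - 2.5|x|^2 + 1 for |x| <= 1 and
W(x) = -0.5|x|^3 + 2.5|x|^2 - 4|x| + 2 for 1 < |x| < 2.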
Template matching algorithm's interface.
The interface specifies the set of methods which should be implemented by different
template matching algorithms - algorithms which search for the given template in a
specified image.
Process image looking for matchings with specified template.
Source image to process.
Template image to search for.
Rectangle in source image to search template for.
Returns array of found matchings.
Process image looking for matchings with specified template.
Source image data to process.
Template image to search for.
Rectangle in source image to search template for.
Returns array of found matchings.
Process image looking for matchings with specified template.
Unmanaged source image to process.
Unmanaged template image to search for.
Rectangle in source image to search template for.
Returns array of found matchings.
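A usage sketch with the library's exhaustive implementation of this interface (assuming the ExhaustiveTemplateMatching class with a similarity threshold parameter):
// create template matching algorithm's instance with 0.9 similarity threshold
ExhaustiveTemplateMatching tm = new ExhaustiveTemplateMatching(0.9f);
// find all matchings with similarity above the threshold
TemplateMatch[] matchings = tm.ProcessImage(sourceImage, template);
// process found matchings
foreach (TemplateMatch m in matchings)
{
    // m.Rectangle - matching area, m.Similarity - similarity in [0..1]
}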
Internal memory manager used by image processing routines.
The memory manager supports memory allocation/deallocation
caching. Caching means that memory blocks may not be freed on request, but
kept for later reuse.
Maximum amount of memory blocks to keep in cache.
The value specifies the number of memory blocks which can be
cached by the memory manager.
Default value is set to 3. Maximum value is 10.
Current amount of memory blocks in cache.
Amount of busy memory blocks in cache (which were not freed yet by user).
Amount of free memory blocks in cache (which are not busy by users).
Amount of cached memory in bytes.
Maximum size in bytes of a memory block which can be cached.
Memory blocks whose size is greater than this value are not cached.
Minimum size in bytes of a memory block which can be cached.
Memory blocks whose size is less than this value are not cached.
Allocate unmanaged memory.
Memory size to allocate.
Returns pointer to the allocated memory buffer.
The method allocates the requested amount of memory and returns a pointer to it. It may avoid allocation
in the case a caching scheme is used and there is already enough allocated memory available.
There is insufficient memory to satisfy the request.
Free unmanaged memory.
Pointer to memory buffer to free.
This method may skip actual deallocation of memory and keep it for future requests,
if some caching scheme is used.
Force freeing unused memory.
Frees and removes from cache memory blocks, which are not used by users.
Returns number of freed memory blocks.
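A minimal sketch of the allocation workflow, assuming the static members MemoryManager.Alloc, MemoryManager.Free and MemoryManager.FreeUnusedMemory match the descriptions above:
// allocate a 1 KB unmanaged buffer (may be served from the cache)
IntPtr buffer = MemoryManager.Alloc(1024);
// ... use the buffer ...
// return the buffer (may be kept in cache instead of being freed)
MemoryManager.Free(buffer);
// force freeing of cached blocks which are no longer busy
int freedBlocks = MemoryManager.FreeUnusedMemory();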
Moravec corners detector.
The class implements the Moravec corners detector. For information about the
algorithm's details, its description
should be studied.
Due to limitations of Moravec corners detector (anisotropic response, etc.) its usage is limited
to certain cases only.
The class processes only grayscale 8 bpp and color 24/32 bpp images.
Sample usage:
// create corner detector's instance
MoravecCornersDetector mcd = new MoravecCornersDetector( );
// process image searching for corners
List<IntPoint> corners = mcd.ProcessImage( image );
// process points
foreach ( IntPoint corner in corners )
{
// ...
}
Window size used to determine if point is interesting, [3, 15].
The value specifies the window size, which is used for the initial search of
corner candidates and then for the search of local maxima.
Default value is set to 3.
The specified value is not odd.
Threshold value, which is used to filter out uninteresting points.
The value is used to filter uninteresting points - points which have a value below
the specified threshold are not treated as corner candidates. Increasing this value decreases
the amount of detected points.
Default value is set to 500.
Initializes a new instance of the class.
Initializes a new instance of the class.
Threshold value, which is used to filter out uninteresting points.
Initializes a new instance of the class.
Threshold value, which is used to filter out uninteresting points.
Window size used to determine if point is interesting.
This method should be implemented by inheriting classes to implement the
actual corners detection, transforming the input image into a list of points.
Creates a new object that is a copy of the current instance.
Searching of quadrilateral/triangle corners.
The class searches for a quadrilateral's/triangle's corners in the specified image.
It first collects edge points of the object and then uses
to find the corners of the quadrilateral/triangle.
The class treats all black pixels as background (non-object) and
all non-black pixels as object.
The class processes grayscale 8 bpp and color 24/32 bpp images.
Sample usage:
// get corners of the quadrilateral
QuadrilateralFinder qf = new QuadrilateralFinder( );
List<IntPoint> corners = qf.ProcessImage( image );
// lock image to draw on it with AForge.NET's methods
// (or draw directly on image without locking if it is unmanaged image)
BitmapData data = image.LockBits( new Rectangle( 0, 0, image.Width, image.Height ),
ImageLockMode.ReadWrite, image.PixelFormat );
Drawing.Polygon( data, corners, Color.Red );
for ( int i = 0; i < corners.Count; i++ )
{
Drawing.FillRectangle( data,
new Rectangle( corners[i].X - 2, corners[i].Y - 2, 5, 5 ),
Color.FromArgb( i * 32 + 127 + 32, i * 64, i * 64 ) );
}
image.UnlockBits( data );
Source image:
Result image:
Find corners of quadrilateral/triangular area in the specified image.
Source image to search quadrilateral for.
Returns a list of points, which are corners of the quadrilateral/triangular area found
in the specified image. The first point in the list is the point with lowest
X coordinate (and with lowest Y if there are several points with the same X value).
Points are in clockwise order (screen coordinates system).
Unsupported pixel format of the source image.
Find corners of quadrilateral/triangular area in the specified image.
Source image data to search quadrilateral for.
Returns a list of points, which are corners of the quadrilateral/triangular area found
in the specified image. The first point in the list is the point with lowest
X coordinate (and with lowest Y if there are several points with the same X value).
Points are in clockwise order (screen coordinates system).
Unsupported pixel format of the source image.
Find corners of quadrilateral/triangular area in the specified image.
Source image to search quadrilateral for.
Returns a list of points, which are corners of the quadrilateral/triangular area found
in the specified image. The first point in the list is the point with lowest
X coordinate (and with lowest Y if there are several points with the same X value).
Points are in clockwise order (screen coordinates system).
Unsupported pixel format of the source image.
Blob counter based on recursion.
The class counts and extracts stand-alone objects in
images using a recursive version of the connected components labeling
algorithm.
The algorithm treats all pixels with values less or equal to
as background, but pixels with higher values are treated as objects' pixels.
Since this algorithm is based on recursion, care must be taken when applying
it to big images with big blobs, because in this case the recursion will require a big
stack size and may lead to stack overflow. The recursive version may be applied (and may
be even faster than ) to images with small blobs - a
"star sky" image (or small cells, for example, etc.).
For blobs' searching the class supports 8 bpp indexed grayscale images and
24/32 bpp color images.
See documentation about for information about which
pixel formats are supported for extraction of blobs.
Sample usage:
// create an instance of blob counter algorithm
var bc = new RecursiveBlobCounter();
// process binary image
bc.ProcessImage(image);
// process blobs
foreach (Rectangle rect in bc.GetObjectsRectangles())
{
// ...
}
Background threshold's value.
The property sets the threshold value for distinguishing between background
and objects' pixels. All pixels with values less than or equal to this property are
treated as background, while pixels with higher values are treated as objects' pixels.
In the case of colour images a pixel is treated as an object's pixel if any of its
RGB values is higher than the corresponding value of this threshold.
For processing a grayscale image, set the property with all RGB components equal.
Default value is set to (0, 0, 0) - black colour.
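For example (the property name BackgroundThreshold and its Color type are assumed):
// treat all pixels with R, G and B values less than or equal to 32 as background
bc.BackgroundThreshold = Color.FromArgb(32, 32, 32);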
Initializes a new instance of the class.
Creates a new instance of the class with
an empty objects map. Before using methods which provide information about blobs
or extract them, the ,
or
method should be called to collect the objects map.
Initializes a new instance of the class.
Image to look for objects in.
Initializes a new instance of the class.
Image data to look for objects in.
Initializes a new instance of the class.
Unmanaged image to look for objects in.
Actual objects map building.
Unmanaged image to process.
The method supports 8 bpp indexed grayscale images and 24/32 bpp color images.
Unsupported pixel format of the source image.
Susan corners detector.
The class implements Susan corners detector, which is described by
S.M. Smith in: S.M. Smith, "SUSAN - a new approach to low level image processing",
Internal Technical Report TR95SMS1, Defence Research Agency, Chobham Lane, Chertsey,
Surrey, UK, 1995.
Some implementation notes:
- Analyzing each pixel and searching for its USAN area, a 7x7 mask is used,
which is comprised of 37 pixels. The mask has a circular shape:
  xxx
 xxxxx
xxxxxxx
xxxxxxx
xxxxxxx
 xxxxx
  xxx
- If the USAN's center of mass has the same coordinates as the nucleus
(central point), the pixel is not a corner.
- For noise suppression the 5x5 square window is used.
The class processes only grayscale 8 bpp and color 24/32 bpp images.
In the case of a color image, it is converted to grayscale internally using the
filter.
Sample usage:
// create corners detector's instance
SusanCornersDetector scd = new SusanCornersDetector( );
// process image searching for corners
List<IntPoint> corners = scd.ProcessImage( image );
// process points
foreach ( IntPoint corner in corners )
{
// ...
}
Brightness difference threshold.
The brightness difference threshold controls the number
of pixels which become part of the USAN area. If the difference between the central
pixel (nucleus) and a surrounding pixel is not higher than the difference threshold,
then that pixel becomes part of the USAN.
Increasing this value decreases the amount of detected corners.
Default value is set to 25.
Geometrical threshold.
The geometrical threshold sets the maximum number of pixels
in the USAN area around a corner. If a potential corner has a USAN with more pixels, then
it is not a corner.
Decreasing this value decreases the amount of detected corners - only sharp corners
are detected. Increasing this value increases the amount of detected corners, but
also increases the amount of flat corners, which may not be corners at all.
Default value is set to 18, which is half of the maximum amount of pixels in the USAN.
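For example, continuing the sample above (the property names DifferenceThreshold and GeometricalThreshold are assumed):
// detect fewer, sharper corners by raising the brightness difference
// threshold and lowering the geometrical threshold
scd.DifferenceThreshold = 70;
scd.GeometricalThreshold = 16;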
Initializes a new instance of the class.
Initializes a new instance of the class.
Brightness difference threshold.
Geometrical threshold.
This method should be implemented by inheriting classes to implement the
actual corners detection, transforming the input image into a list of points.
Creates a new object that is a copy of the current instance.
Template match class keeps information about found template match. The class is
used with template matching algorithms implementing
interface.
Rectangle of the matching area.
Similarity between template and found matching, [0..1].
Initializes a new instance of the class.
Rectangle of the matching area.
Similarity between template and found matching, [0..1].
Base class for image feature extractors that implement the interface.
The type of the descriptor vector for the feature (e.g. double[]).
Obsolete. Please use the method instead.
Obsolete. Please use the method instead.
Obsolete. Please use the method instead.
Base class for image feature extractors that implement the interface.
The type of the extracted features (e.g. , ).
Gets the list of image pixel formats that are supported by
this extractor. The extractor will check whether the pixel
format of any provided image is in this list to determine
whether the image can be processed or not.
Returns -1.
Gets the dimensionality of the features generated by this extractor.
Initializes a new instance of the class.
Obsolete. Please use the method instead.
Obsolete. Please use the method instead.
Obsolete. Please use the method instead.
Applies the transformation to an input, producing an associated output.
The input data to which the transformation should be applied.
The output generated by applying this transformation to the given input.
Applies the transformation to an input, producing an associated output.
The input data to which the transformation should be applied.
The output generated by applying this transformation to the given input.
Applies the transformation to a set of input vectors,
producing an associated set of output vectors.
The input data to which
the transformation should be applied.
The location to where to store the
result of this transformation.
The output generated by applying this
transformation to the given input.
Applies the transformation to an input, producing an associated output.
The input data to which the transformation should be applied.
The output generated by applying this transformation to the given input.
Applies the transformation to an input, producing an associated output.
The input data to which the transformation should be applied.
The output generated by applying this transformation to the given input.
Applies the transformation to a set of input vectors,
producing an associated set of output vectors.
The input data to which
the transformation should be applied.
The location to where to store the
result of this transformation.
The output generated by applying this
transformation to the given input.
This method should be implemented by inheriting classes to implement the
actual feature extraction, transforming the input image into a list of features.
Creates a new object that is a copy of the current instance.
Creates a new object that is a copy of the current instance.
Releases unmanaged and - optionally - managed resources.
true to release both managed and unmanaged resources; false to release only unmanaged resources.
Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.
Feature dictionary. Associates a set of Haralick features to a given degree
used to compute the originating GLCM.
Initializes a new instance of the class.
Combines features generated from different
GLCMs computed using different angulations
by concatenating them into a single vector.
The number of Haralick's original features to compute.
A single vector containing all values computed from
the different s.
If there are d degrees in this
collection, and n given to compute, the
generated vector will have size d * n. All features from different
degrees will be concatenated into this single result vector.
Combines features generated from different
GLCMs computed using different angulations
by averaging them into a single vector.
The number of Haralick's original features to compute.
A single vector containing the average of the values
computed from the different s.
If there are d degrees in this
collection, and n given to compute, the
generated vector will have size n. All features from different
degrees will be averaged into this single result vector.
Combines features generated from different
GLCMs computed using different angulations
by averaging them into a single vector.
The number of Haralick's original features to compute.
A single vector containing the average of the values
computed from the different s.
If there are d degrees in this
collection, and n given to compute, the
generated vector will have size 2*n*d. Each even index will have
the average of a given feature, and the subsequent odd index will contain
the range of this feature.
Combines features generated from different
GLCMs computed using different angulations
by averaging them into a single vector, normalizing them to be between -1 and 1.
The number of Haralick's original features to compute.
A single vector containing the averaged and normalized values
computed from the different s.
If there are d degrees in this
collection, and n given to compute, the
generated vector will have size n. All features will be averaged, and
the mean will be scaled to be in a [-1,1] interval.
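A hedged sketch of the two main combination strategies above, assuming a dictionary instance named features and the method names Combine and Average (13 being the count of Haralick's original features):
// concatenate features from all d degrees: resulting vector has size d * 13
double[] concatenated = features.Combine(13);
// average features across degrees: resulting vector has size 13
double[] averaged = features.Average(13);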
Base class for texture generators.
Each texture generator generates a 2-D texture of the specified size and returns
it as a two-dimensional array of intensities in the range [0, 1] - the texture's values.
Generate texture.
Texture's width.
Texture's height.
Two dimensional array of texture's intensities.
Generates new texture of the specified size.
Reset generator.
Resets the generator - resets all internal variables, regenerates
internal random numbers, etc.
Generate texture.
Texture's width.
Texture's height.
Two dimensional array of texture's intensities.
Generates new texture of the specified size.
Clouds texture.
The texture generator creates textures with the effect of clouds.
The generator is based on the Perlin noise function.
Sample usage:
// create texture generator
CloudsTexture textureGenerator = new CloudsTexture();
// generate new texture
float[,] texture = textureGenerator.Generate(320, 240);
// convert it to image to visualize
Bitmap textureImage = texture.ToBitmap();
Result image:
Initializes a new instance of the class.
Generate texture.
Texture's width.
Texture's height.
Two dimensional array of intensities.
Generates new texture of the specified size.
Texture generator interface.
Each texture generator generates a 2-D texture of the specified size and returns
it as a two-dimensional array of intensities in the range [0, 1] - the texture's values.
Generate texture.
Texture's width.
Texture's height.
Two dimensional array of texture's intensities.
Generates new texture of the specified size.
Reset generator.
Resets the generator - resets all internal variables, regenerates
internal random numbers, etc.
Labyrinth texture.
The texture generator creates textures with the effect of a labyrinth.
The generator is based on the Perlin noise function.
Sample usage:
// create texture generator
LabyrinthTexture textureGenerator = new LabyrinthTexture( );
// generate new texture
float[,] texture = textureGenerator.Generate( 320, 240 );
// convert it to image to visualize
Bitmap textureImage = TextureTools.ToBitmap( texture );
Result image:
Initializes a new instance of the class.
Generate texture.
Texture's width.
Texture's height.
Two dimensional array of intensities.
Generates new texture of the specified size.
Marble texture.
The texture generator creates textures with the effect of marble.
The and properties allow controlling the look
of the marble texture in the X/Y directions.
The generator is based on the Perlin noise function.
Sample usage:
// create texture generator
MarbleTexture textureGenerator = new MarbleTexture( );
// generate new texture
float[,] texture = textureGenerator.Generate( 320, 240 );
// convert it to image to visualize
Bitmap textureImage = TextureTools.ToBitmap( texture );
Result image:
X period value, ≥ 2. Default is 5.
Default value is set to 5.
Y period value, ≥ 2. Default is 10.
Default value is set to 10.
Initializes a new instance of the class.
X period value.
Y period value.
Generate texture.
Texture's width.
Texture's height.
Two dimensional array of intensities.
Generates new texture of the specified size.
Textile texture.
The texture generator creates textures with the effect of textile.
The generator is based on the Perlin noise function.
Sample usage:
// create texture generator
TextileTexture textureGenerator = new TextileTexture( );
// generate new texture
float[,] texture = textureGenerator.Generate( 320, 240 );
// convert it to image to visualize
Bitmap textureImage = TextureTools.ToBitmap( texture );
Result image:
Initializes a new instance of the class.
Generate texture.
Texture's width.
Texture's height.
Two dimensional array of intensities.
Generates new texture of the specified size.
Obsolete. Please use classes from the Accord.Imaging.Converters namespace instead.
Obsolete. Please use the class instead. See remarks for an example.
MatrixToImage i2m = new MatrixToImage();
Bitmap image;
i2m.Convert(texture, out image);
return image;
Obsolete. Please use the class instead. See remarks for an example.
ImageToMatrix i2m = new ImageToMatrix();
float[,] texture;
i2m.Convert(image, out texture);
return texture;
Obsolete. Please use the class instead. See remarks for an example.
ImageToMatrix i2m = new ImageToMatrix();
float[,] texture;
i2m.Convert(image, out texture);
return texture;
Obsolete. Please use the class instead. See remarks for an example.
ImageToMatrix i2m = new ImageToMatrix();
float[,] texture;
i2m.Convert(image, out texture);
return texture;
Wood texture.
The texture generator creates textures with the effect of
rings on a trunk's shear. The property allows specifying the
desired amount of wood rings.
The generator is based on the Perlin noise function.
Sample usage:
// create texture generator
WoodTexture textureGenerator = new WoodTexture();
// generate new texture
float[,] texture = textureGenerator.Generate(320, 240);
// convert it to image to visualize
Bitmap textureImage = texture.ToBitmap();
Result image:
Wood rings amount, ≥ 3. Default is 12.
The property sets the amount of wood rings, which make effect of
rings on trunk's shear.
Default value is set to 12.
Initializes a new instance of the class.
Initializes a new instance of the class.
Wood rings amount.
Generate texture.
Texture's width.
Texture's height.
Two dimensional array of intensities.
Generates new texture of the specified size.
Image in unmanaged memory.
The class represents a wrapper of an image in unmanaged memory. Using this class
it is possible either to allocate a new image in unmanaged memory, or to just wrap a provided
pointer to unmanaged memory where an image is stored.
Usage of unmanaged images is mostly beneficial when it is required to apply multiple
image processing routines to a single image. In such a scenario, usage of .NET managed images
usually leads to worse performance, because each routine needs to lock the managed image
before processing is done and then unlock it afterwards. Without
these lock/unlock operations there is no way to get direct access to the managed image's data,
which means there is no way to do fast image processing. So usage of managed images leads to
overhead caused by the locks/unlocks. Unmanaged images are represented internally using an
unmanaged memory buffer. This means that no locks/unlocks are required in order to access the
image data (no overhead).
Sample usage:
// sample 1 - wrapping .NET image into unmanaged without
// making extra copy of image in memory
BitmapData imageData = image.LockBits(
new Rectangle( 0, 0, image.Width, image.Height ),
ImageLockMode.ReadWrite, image.PixelFormat );
try
{
UnmanagedImage unmanagedImage = new UnmanagedImage( imageData );
// apply several routines to the unmanaged image
}
finally
{
image.UnlockBits( imageData );
}
// sample 2 - converting .NET image into unmanaged
UnmanagedImage unmanagedImage = UnmanagedImage.FromManagedImage( image );
// apply several routines to the unmanaged image
...
// convert to managed image if it is required to display it at some point in time
Bitmap managedImage = unmanagedImage.ToManagedImage( );
Pointer to image data in unmanaged memory.
Image width in pixels.
Image height in pixels.
Image stride (line size in bytes).
Image pixel format.
Gets the image size, in bytes.
Gets the image size, in pixels.
Gets the number of extra bytes after the image width is over. This can be computed
as - * .
Gets the size of the pixels in this image, in bytes. For
example, an 8-bpp grayscale image would have pixel size 1.
Initializes a new instance of the class.
Pointer to image data in unmanaged memory.
Image width in pixels.
Image height in pixels.
Image stride (line size in bytes).
Image pixel format.
Using this constructor, make sure all specified image attributes are correct
and correspond to the unmanaged memory buffer. If some attributes are specified incorrectly,
this may lead to exceptions when working with the unmanaged memory.
Initializes a new instance of the class.
Locked bitmap data.
Unlike the method, this constructor does not make a
copy of the managed image. This means that the managed image must stay locked for the time the instance
of the unmanaged image is used.
Destroys the instance of the class.
Dispose the object.
Frees unmanaged resources used by the object. The object becomes unusable
after that.
The method needs to be called only if the unmanaged image was allocated
using the method. If the class instance
was created using the constructor, this method does not free unmanaged memory.
Dispose the object.
Indicates if disposing was initiated manually.
Clone the unmanaged image.
Returns clone of the unmanaged image.
The method does complete cloning of the object.
Copy unmanaged image.
Destination image to copy this image to.
The method copies current unmanaged image to the specified image.
Size and pixel format of the destination image must be exactly the same.
Destination image has different size or pixel format.
Allocate new image in unmanaged memory.
Image width.
Image height.
Image pixel format.
Return image allocated in unmanaged memory.
Allocate new image with specified attributes in unmanaged memory.
The method supports only
Format8bppIndexed,
Format16bppGrayScale,
Format24bppRgb,
Format32bppRgb,
Format32bppArgb,
Format32bppPArgb,
Format48bppRgb,
Format64bppArgb and
Format64bppPArgb pixel formats.
If the Format8bppIndexed
format is specified, a palette is not created for the image (it is supposed to be an
8 bpp grayscale image).
Unsupported pixel format was specified.
Invalid image size was specified.
Allocate new image in unmanaged memory.
Image width.
Image height.
Image stride.
Image pixel format.
Return image allocated in unmanaged memory.
Allocate new image with specified attributes in unmanaged memory.
The method supports only
Format8bppIndexed,
Format16bppGrayScale,
Format24bppRgb,
Format32bppRgb,
Format32bppArgb,
Format32bppPArgb,
Format48bppRgb,
Format64bppArgb and
Format64bppPArgb pixel formats.
If the Format8bppIndexed
format is specified, a palette is not created for the image (it is supposed to be an
8 bpp grayscale image).
Unsupported pixel format was specified.
Invalid image size was specified.
Create managed image from the unmanaged.
Returns managed copy of the unmanaged image.
The method creates a managed copy of the unmanaged image with the
same size and pixel format (it calls specifying
for the makeCopy parameter).
Create managed image from the unmanaged.
Make a copy of the unmanaged image or not.
Returns managed copy of the unmanaged image.
If the is set to , then the method
creates a managed copy of the unmanaged image, so the managed image stays valid even when the unmanaged
image gets disposed. However, setting this parameter to creates a managed image which is
just a wrapper around the unmanaged image. So if the unmanaged image is disposed, the
managed image becomes invalid and accessing it will generate an exception.
The unmanaged image has some invalid properties, which results
in failure of converting it to managed image. This may happen if user used the
constructor specifying some
invalid parameters.
Create unmanaged image from the specified byte array.
Source byte array containing the image's pixels.
The height of the image.
The width of the image.
The of the pixels.
Returns new unmanaged image, which is a copy of the source pixel data.
The method creates an exact copy of the specified pixel data, allocated
in unmanaged memory.
Unsupported pixel format of source image.
Create unmanaged image from the specified managed image.
Source managed image.
Returns new unmanaged image, which is a copy of the source managed image.
The method creates an exact copy of the specified managed image, allocated
in unmanaged memory.
Unsupported pixel format of source image.
Create unmanaged image from the specified managed image.
Source locked image data.
Returns new unmanaged image, which is a copy of the source managed image.
The method creates an exact copy of the specified managed image, allocated
in unmanaged memory. This means that the managed image may be unlocked right after the call to this
method.
Unsupported pixel format of source image.
Collect pixel values from the specified list of coordinates.
List of coordinates to collect pixels' value from.
Returns array of pixels' values from the specified coordinates.
The method goes through the specified list of points and for each point retrieves
the corresponding pixel's value from the unmanaged image.
For a grayscale image the output array has the same length as the number of points in the
specified list of points. For a color image the output array has triple length, containing pixels'
values in RGB order.
The method does not make any checks for valid coordinates and leaves this up to the user.
If the specified coordinates are out of the image's bounds, the result is not predictable (a crash in most cases).
This method is intended for images with 8 bpp channels only (8 bpp grayscale images and
24/32 bpp color images).
Unsupported pixel format of the source image. Use Collect16bppPixelValues() method for
images with 16 bpp channels.
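A short sketch of collecting values with this method (the name Collect8bppPixelValues is taken from the exception note above):
// collect pixel values at a few coordinates of an image with 8 bpp channels
List<IntPoint> points = new List<IntPoint>();
points.Add(new IntPoint(10, 10));
points.Add(new IntPoint(20, 30));
byte[] values = unmanagedImage.Collect8bppPixelValues(points);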
Collect coordinates of non-black pixels in the image.
Returns a list of points which have a color other than black.
Collect coordinates of non-black pixels within the specified rectangle of the image.
Image's rectangle to process.
Returns a list of points which have a color other than black.
Set pixels with the specified coordinates to the specified color.
List of points to set color for.
Color to set for the specified points.
For images having 16 bpp per color plane, the method extends the specified color
value to 16 bit by multiplying it by 256.
Set pixel with the specified coordinates to the specified color.
Point's coordinates to set color for.
Color to set for the pixel.
See for more information.
Set pixel with the specified coordinates to the specified color.
X coordinate of the pixel to set.
Y coordinate of the pixel to set.
Color to set for the pixel.
For images having 16 bpp per color plane, the method extends the specified color
value to 16 bit by multiplying it by 256.
For grayscale images this method will calculate intensity value based on the below formula:
0.2125 * Red + 0.7154 * Green + 0.0721 * Blue
Set pixel with the specified coordinates to the specified value.
X coordinate of the pixel to set.
Y coordinate of the pixel to set.
Pixel value to set.
The method sets all color components of the pixel to the specified value.
If it is a grayscale image, then pixel's intensity is set to the specified value.
If it is a color image, then pixel's R/G/B components are set to the same specified value
(if an image has alpha channel, then it is set to maximum value - 255 or 65535).
For images having 16 bpp per color plane, the method extends the specified color
value to 16 bit by multiplying it by 256.
Get color of the pixel with the specified coordinates.
Point's coordinates to get color of.
Return pixel's color at the specified coordinates.
See for more information.
Get color of the pixel with the specified coordinates.
X coordinate of the pixel to get.
Y coordinate of the pixel to get.
Return pixel's color at the specified coordinates.
If the image has 8 bpp grayscale format, the method will return a color with
all R/G/B components set to the same value, which is the grayscale intensity.
The method supports only 8 bpp grayscale images and 24/32 bpp color images so far.
The specified pixel coordinate is out of image's bounds.
Pixel format of this image is not supported by the method.
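A short sketch of per-pixel access (the Create, SetPixel and GetPixel members are assumed to match the descriptions above):
// allocate a new 24 bpp color image in unmanaged memory
UnmanagedImage img = UnmanagedImage.Create(320, 240, PixelFormat.Format24bppRgb);
// set a pixel and read it back
img.SetPixel(10, 15, Color.Red);
Color c = img.GetPixel(10, 15);
// free the unmanaged buffer when done
img.Dispose();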
Collect pixel values from the specified list of coordinates.
List of coordinates to collect pixels' value from.
Returns array of pixels' values from the specified coordinates.
The method goes through the specified list of points and for each point retrieves
the corresponding pixel's value from the unmanaged image.
For a grayscale image the output array has the same length as the number of points in the
specified list of points. For a color image the output array has triple length, containing pixels'
values in RGB order.
The method does not make any checks for valid coordinates and leaves this up to the user.
If the specified coordinates are out of the image's bounds, the result is not predictable (a crash in most cases).
This method is intended for images with 16 bpp channels only (16 bpp grayscale images and
48/64 bpp color images).
Unsupported pixel format of the source image. Use Collect8bppPixelValues() method for
images with 8 bpp channels.
Converts the image into a sequence of bytes.
Vertical intensity statistics.
The class provides information about vertical distribution
of pixel intensities, which may be used to locate objects, their centers, etc.
The class accepts grayscale (8 bpp indexed and 16 bpp) and color (24, 32, 48 and 64 bpp) images.
In the case of 32 and 64 bpp color images, the alpha channel is not processed - statistics are not
gathered for this channel.
Sample usage:
// collect statistics
VerticalIntensityStatistics vis = new VerticalIntensityStatistics( sourceImage );
// get gray histogram (for grayscale image)
Histogram histogram = vis.Gray;
// output some histogram's information
System.Diagnostics.Debug.WriteLine( "Mean = " + histogram.Mean );
System.Diagnostics.Debug.WriteLine( "Min = " + histogram.Min );
System.Diagnostics.Debug.WriteLine( "Max = " + histogram.Max );
Sample grayscale image with its vertical intensity histogram:
Histogram for red channel.
Histogram for green channel.
Histogram for blue channel.
Histogram for gray channel (intensities).
Value which specifies if the processed image was color or grayscale.
If the property equals true, then the
property should be used to retrieve histogram for the processed grayscale image.
Otherwise , and property
should be used to retrieve histogram for particular RGB channel of the processed
color image.
Initializes a new instance of the class.
Source image.
Unsupported pixel format of the source image.
Initializes a new instance of the class.
Source image data.
Unsupported pixel format of the source image.
Initializes a new instance of the class.
Source unmanaged image.
Unsupported pixel format of the source image.
Gather vertical intensity statistics for specified image.
Source image.
Border following algorithm for contour extraction.
// Create a new border following algorithm
BorderFollowing bf = new BorderFollowing();
// Get all points in the contour of the image.
List<IntPoint> contour = bf.FindContour(grayscaleImage);
// Mark all points in the contour point list in blue
new PointsMarker(contour, Color.Blue).ApplyInPlace(image);
// Show the result
ImageBox.Show(image);
The resulting image is shown below.
Gets or sets the pixel value threshold above which a pixel
is considered white (belonging to the object). Default is zero.
Initializes a new instance of the class.
Initializes a new instance of the class.
The pixel value threshold above which a pixel
is considered white (belonging to the object). Default is zero.
Extracts the contour from a single object in a grayscale image.
A grayscale image.
A list of s defining a contour.
Extracts the contour from a single object in a grayscale image.
A grayscale image.
A list of s defining a contour.
Extracts the contour from a single object in a grayscale image.
A grayscale image.
A list of s defining a contour.
Common interface for contour extraction algorithms.
Extracts the contour from a single object in a grayscale image.
A grayscale image.
A list of s defining a contour.
Extracts the contour from a single object in a grayscale image.
A grayscale image.
A list of s defining a contour.
Extracts the contour from a single object in a grayscale image.
A grayscale image.
A list of s defining a contour.
Contains classes and methods to convert between different image representations,
such as between common images, numeric matrices and arrays.
The image converters are able to convert to and from images defined as byte,
double and float multi-dimensional matrices, jagged matrices, and even
images represented as flat arrays. It is also possible to convert images defined as
series of individual pixel colors into s, and back from those
s into any of the aforementioned representations. Support for
AForge.NET's UnmanagedImage is also available.
The namespace class diagram is shown below.
Jagged array to Bitmap converter.
This class can convert double and float arrays to either Grayscale
or color Bitmap images. Color images should be represented as an
array of pixel values for the final image. The actual dimensions
of the image should be specified in the class constructor.
When this class is converting from or
, the values of the
and properties are ignored and no scaling operation
is performed.
This example converts a single array of double-precision floating-point
numbers with values from 0 to 1 into a grayscale image.
// Create an array representation
// of a 4x4 image with an inner 2x2
// square drawn in the middle
double[] pixels =
{
0, 0, 0, 0,
0, 1, 1, 0,
0, 1, 1, 0,
0, 0, 0, 0,
};
// Create the converter to create a Bitmap from the array
ArrayToImage conv = new ArrayToImage(width: 4, height: 4);
// Declare an image and store the pixels on it
Bitmap image; conv.Convert(pixels, out image);
// Show the image on screen
image = new ResizeNearestNeighbor(320, 320).Apply(image);
ImageBox.Show(image, PictureBoxSizeMode.Zoom);
The resulting image is shown below.
Gets or sets the maximum double value in the
double array associated with the brightest color.
Gets or sets the minimum double value in the
double array associated with the darkest color.
Gets or sets the height of the image
stored in the double array.
Gets or sets the width of the image
stored in the double array.
Initializes a new instance of the class.
The width of the image to be created.
The height of the image to be created.
Initializes a new instance of the class.
The width of the image to be created.
The height of the image to be created.
The minimum double value in the double array
associated with the darkest color. Default is 0.
The maximum double value in the double array
associated with the brightest color. Default is 1.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
For byte transformations, the Min and Max properties
are ignored.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
For byte transformations, the Min and Max properties
are ignored.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
For byte transformations, the Min and Max properties
are ignored.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
For byte transformations, the Min and Max properties are ignored. The
resulting image from calling this method will always be 32-bit ARGB.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
For byte transformations, the Min and Max properties
are ignored.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Public interface for image converter algorithms.
Input image type.
Output image type.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Multidimensional array to Bitmap converter.
This class can convert double and float multidimensional arrays
(matrices) to Grayscale bitmaps. The color representation of the
values contained in the matrices must be specified through the
Min and Max properties of the class or class constructor.
This example converts a multidimensional array of double-precision
floating-point numbers with values from 0 to 1 into a grayscale image.
// Create a matrix representation
// of a 4x4 image with an inner 2x2
// square drawn in the middle
double[,] pixels =
{
{ 0, 0, 0, 0 },
{ 0, 1, 1, 0 },
{ 0, 1, 1, 0 },
{ 0, 0, 0, 0 },
};
// Create the converter to convert the matrix to an image
MatrixToImage conv = new MatrixToImage(min: 0, max: 1);
// Declare an image and store the pixels on it
Bitmap image; conv.Convert(pixels, out image);
// Show the image on screen
image = new ResizeNearestNeighbor(320, 320).Apply(image);
ImageBox.Show(image, PictureBoxSizeMode.Zoom);
The resulting image is shown below.
Gets or sets the maximum double value in the
double array associated with the brightest color.
Gets or sets the minimum double value in the
double array associated with the darkest color.
Gets or sets the desired output format of the image.
Initializes a new instance of the class.
The minimum double value in the double array
associated with the darkest color. Default is 0.
The maximum double value in the double array
associated with the brightest color. Default is 1.
Initializes a new instance of the class.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Bitmap to jagged array converter.
This class converts images to single or jagged arrays of
either double-precision or single-precision floating-point
values.
This example converts a 16x16 Bitmap image into
a double[] array with values between 0 and 1.
// Obtain a 16x16 bitmap image
// Bitmap image = ...
// Show on screen
ImageBox.Show(image, PictureBoxSizeMode.Zoom);
// Create the converter to convert the image to an
// array containing only values between 0 and 1
ImageToArray conv = new ImageToArray(min: 0, max: 1);
// Convert the image and store it in the array
double[] array; conv.Convert(image, out array);
// Show the array on screen
ImageBox.Show(array, 16, 16, PictureBoxSizeMode.Zoom);
The resulting image is shown below.
Gets or sets the maximum double value in the
double array associated with the brightest color.
Gets or sets the minimum double value in the
double array associated with the darkest color.
Gets or sets the channel to be extracted.
Initializes a new instance of the class.
The minimum double value in the double array
associated with the darkest color. Default is 0.
The maximum double value in the double array
associated with the brightest color. Default is 1.
The channel to extract. Default is 0.
Initializes a new instance of the class.
Initializes a new instance of the class.
The minimum double value in the double array
associated with the darkest color. Default is 0.
The maximum double value in the double array
associated with the brightest color. Default is 1.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Bitmap to multidimensional matrix converter.
This class converts images to multidimensional matrices of
either double-precision or single-precision floating-point
values.
This example converts a 16x16 Bitmap image into
a double[,] array with values between 0 and 1.
// Obtain an image
// Bitmap image = ...
// Show on screen
ImageBox.Show(image, PictureBoxSizeMode.Zoom);
// Create the converter to convert the image to a
// matrix containing only values between 0 and 1
ImageToMatrix conv = new ImageToMatrix(min: 0, max: 1);
// Convert the image and store it in the matrix
double[,] matrix; conv.Convert(image, out matrix);
// Show the matrix on screen as an image
ImageBox.Show(matrix, PictureBoxSizeMode.Zoom);
The resulting image is shown below.
Additionally, the image can also be shown in alternative
representations such as text or data tables.
// Show the matrix on screen as a .NET multidimensional array
MessageBox.Show(matrix.ToString(CSharpMatrixFormatProvider.InvariantCulture));
// Show the matrix on screen as a table
DataGridBox.Show(matrix, nonBlocking: true)
.SetAutoSizeColumns(DataGridViewAutoSizeColumnsMode.Fill)
.SetAutoSizeRows(DataGridViewAutoSizeRowsMode.AllCellsExceptHeaders)
.SetDefaultFontSize(5)
.WaitForClose();
The resulting images are shown below.
Gets or sets the maximum double value in the
double array associated with the brightest color.
Gets or sets the minimum double value in the
double array associated with the darkest color.
Gets or sets the channel to be extracted.
Initializes a new instance of the class.
The minimum double value in the double array
associated with the darkest color. Default is 0.
The maximum double value in the double array
associated with the brightest color. Default is 1.
The channel to extract. Default is 0.
Initializes a new instance of the class.
Initializes a new instance of the class.
The minimum double value in the double array
associated with the darkest color. Default is 0.
The maximum double value in the double array
associated with the brightest color. Default is 1.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another.
The input image to be converted.
The converted image.
Converts an image from one representation to another. When
converting to byte, the minimum and maximum
values are ignored.
The input image to be converted.
The converted image.
Converts an image from one representation to another. When
converting to byte, the minimum and maximum
values are ignored.
The input image to be converted.
The converted image.
Converts an image from one representation to another. When
converting to byte, the minimum and maximum
values are ignored.
The input image to be converted.
The converted image.
Converts an image from one representation to another. When
converting to byte, the minimum and maximum
values are ignored.
The input image to be converted.
The converted image.
Converts an image from one representation to another. When
converting to byte, the minimum and maximum
values are ignored.
The input image to be converted.
The converted image.
Converts an image from one representation to another. When
converting to byte, the minimum and maximum
values are ignored.
The input image to be converted.
The converted image.
Converts an image from one representation to another. When
converting to byte, the minimum and maximum
values are ignored.
The input image to be converted.
The converted image.
Converts an image from one representation to another. When
converting to byte, the minimum and maximum
values are ignored.
The input image to be converted.
The converted image.
Converts an image from one representation to another. When
converting to byte, the minimum and maximum
values are ignored.
The input image to be converted.
The converted image.
Converts an image from one representation to another. When
converting to byte, the minimum and maximum
values are ignored.
The input image to be converted.
The converted image.
Standard feature descriptor for feature vectors.
Gets or sets the descriptor vector
associated with this point.
Initializes a new instance of the structure.
The feature vector.
Performs an implicit conversion from
to .
The value to be converted.
The result of the conversion.
Performs a conversion from
to .
Performs an implicit conversion from
to .
The value to be converted.
The result of the conversion.
Performs a conversion from
to .
Performs a conversion from
to .
Implements the operator ==.
Implements the operator !=.
Determines whether the specified is equal to this instance.
The to compare with this instance.
true if the specified is equal to this instance; otherwise, false.
Returns a hash code for this instance.
A hash code for this instance, suitable for use in hashing
algorithms and data structures like a hash table.
Standard feature descriptor for generic feature vectors.
The type of feature vector, such as .
Gets or sets the descriptor vector
associated with this point.
Initializes a new instance of the struct.
The feature vector.
Performs an implicit conversion from
to .
The value to be converted.
The result of the conversion.
Implements the operator ==.
Implements the operator !=.
Determines whether the specified is equal to this instance.
The to compare with this instance.
true if the specified is equal to this instance; otherwise, false.
Returns a hash code for this instance.
A hash code for this instance, suitable for use in hashing
algorithms and data structures like a hash table.
Objective Fidelity Criteria.
References:
-
H.T. Yalazan, J.D. Yucel. "A new objective fidelity criterion
for image processing." Proceedings of the 16th International
Conference on Pattern Recognition, 2002.
Bitmap ori = ... // Original picture
Bitmap recon = ... // Reconstructed picture
// Create a new Objective fidelity comparer:
var of = new ObjectiveFidelity(ori, recon);
// Get the results
long errorTotal = of.ErrorTotal;
double msr = of.MeanSquareError;
double snr = of.SignalToNoiseRatio;
double psnr = of.PeakSignalToNoiseRatio;
double dsnr = of.DerivativeSignalNoiseRatio;
Gets the total error between the two images.
Gets the average error between the two images.
Gets the root mean square error between the two images.
Gets the signal to noise ratio.
Gets the peak signal to noise ratio.
Gets the derivative signal to noise ratio.
Gets the level used in peak signal to noise ratio.
Initializes a new instance of the class.
Initializes a new instance of the class.
The first image to be compared.
The second image that will be compared.
Initializes a new instance of the class.
The first image to be compared.
The second image that will be compared.
Initializes a new instance of the class.
The first image to be compared.
The second image that will be compared.
Compute objective fidelity metrics.
The first image to be compared.
The second image that will be compared.
Compute objective fidelity metrics.
The first image to be compared.
The second image that will be compared.
Compute objective fidelity metrics.
The first image to be compared.
The second image that will be compared.
Extension methods for drawing structures.
Convert the given hyperrectangle into a System.Drawing.Rectangle.
Convert the given hyperrectangle into a System.Drawing.RectangleF.
Convert the given System.Drawing.Rectangle to a .
Convert the given System.Drawing.RectangleF to a .
Static tool functions for imaging.
Computes the sum of all pixels
within a given image region.
The image region.
The region width.
The region height.
The image stride.
The sum of all pixels within the region.
Computes the mean pixel value
within a given image region.
The image region.
The region width.
The region height.
The image stride.
The mean pixel value within the region.
Computes the pixel scatter
within a given image region.
The image region.
The region width.
The region height.
The image stride.
The region pixel mean.
The scatter value within the region.
Computes the pixel variance
within a given image region.
The image region.
The region width.
The region height.
The image stride.
The region pixel mean.
The variance value within the region.
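As a rough, from-scratch illustration of what these helpers compute (not the library's actual implementation), the mean and scatter of a region addressed through a stride could be obtained as follows:
// Mean of a width-by-height region of 8-bpp pixel data, where
// 'stride' is the number of bytes per image row:
static unsafe double Mean(byte* region, int width, int height, int stride)
{
    long sum = 0;
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            sum += region[y * stride + x];
    return sum / (double)(width * height);
}
// Scatter (sum of squared deviations from the mean) of the same
// region; dividing it by the number of pixels yields the variance:
static unsafe double Scatter(byte* region, int width, int height, int stride, double mean)
{
    double scatter = 0;
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            double d = region[y * stride + x] - mean;
            scatter += d * d;
        }
    return scatter;
}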
Co-occurrence Degree.
Find co-occurrences at 0 degrees.
Find co-occurrences at 45 degrees.
Find co-occurrences at 90 degrees.
Find co-occurrences at 135 degrees.
Gray-Level Co-occurrence Matrix (GLCM).
A co-occurrence matrix or co-occurrence distribution is a matrix that is defined over an image to
be the distribution of co-occurring pixel values (grayscale values, or colors) at a given offset.
Any matrix or pair of matrices can be used to generate a co-occurrence matrix, though their most
common application has been in measuring texture in images, so the typical definition, as above,
assumes that the matrix is an image. It is also possible to define the matrix across two different
images. Such a matrix can then be used for color mapping.
References:
-
Mryka Hall-Beyer, "The GLCM Tutorial Home Page".
Available in: http://www.fp.ucalgary.ca/mhallbey/tutorial.htm
-
Wikipedia contributors. "Co-occurrence matrix." Wikipedia, The Free Encyclopedia.
Wikipedia, The Free Encyclopedia, 7 Sep. 2016. Web. 27 Jan. 2017. Available in
https://en.wikipedia.org/wiki/Co-occurrence_matrix
Gray-level Co-occurrence matrices can be computed directly from images:
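For instance, a minimal sketch is shown below (the constructor and Compute signatures here are assumptions based on the members documented in this page; verify them against your version of the library):
// Obtain an 8-bpp grayscale image
// Bitmap image = ...
// Create the GLCM extractor, looking for co-occurrences
// between horizontally adjacent pixels at distance 1:
var glcm = new GrayLevelCooccurrenceMatrix(distance: 1,
    degree: CooccurrenceDegree.Degree0);
// Compute the (normalized) co-occurrence matrix:
double[,] matrix = glcm.Compute(UnmanagedImage.FromManagedImage(image));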
These matrices also play a major role in the computation of Haralick descriptors. For
more examples, including how to use those matrices for image classification, please
see the Haralick and HaralickDescriptor documentation pages.
Gets or sets whether the maximum value of gray should be
automatically computed from the image. If set to false,
the maximum gray value will be assumed 255.
Gets or sets whether the produced GLCM should be normalized,
dividing each element by the number of pairs. Default is true.
true if the GLCM should be normalized; otherwise, false.
Gets or sets the direction at which the co-occurrence should
be found. Default is .
Gets or sets the distance at which the
texture should be analyzed. Default is 1.
Gets the number of pairs registered during the
last computed GLCM.
Initializes a new instance of the class.
Initializes a new instance of the class.
The distance at which the texture should be analyzed. Default is 1.
The direction to look for co-occurrences. Default is .
Initializes a new instance of the class.
The distance at which the texture should be analyzed. Default is 1.
The direction to look for co-occurrences. Default is .
Whether the maximum value of gray should be
automatically computed from the image. Default is true.
Whether the produced GLCM should be normalized,
dividing each element by the number of pairs. Default is true.
Computes the Gray-level Co-occurrence Matrix (GLCM)
for the given source image.
The source image.
A square matrix of double-precision values containing
the GLCM for the given .
Computes the Gray-level Co-occurrence Matrix (GLCM)
for the given source image.
The source image.
A square matrix of double-precision values containing
the GLCM for the given .
Computes the Gray-level Co-occurrence Matrix (GLCM)
for the given source image.
The source image.
A square matrix of double-precision values containing
the GLCM for the given .
Computes the Gray-level Co-occurrence Matrix for the given matrix.
The source image.
A region of the source image where
the GLCM should be computed for.
A square matrix of double-precision values containing the GLCM for the
of the given .
Creates a new object that is a copy of the current instance.
A new object that is a copy of this instance.
Gray-Level Difference Method (GLDM).
Computes a gray-level histogram of difference
values between adjacent pixels in an image.
Gets or sets whether the maximum value of gray should be
automatically computed from the image. If set to false,
the maximum gray value will be assumed 255.
Gets or sets the direction at which the co-occurrence should be found.
Initializes a new instance of the class.
The direction at which the co-occurrence should be found.
Initializes a new instance of the class.
The direction at which the co-occurrence should be found.
Whether the maximum value of gray should be
automatically computed from the image. Default is true.
Computes the Gray-level Difference Method (GLDM)
Histogram for the given source image.
The source image.
A histogram containing co-occurrences
for every gray level in .
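A minimal usage sketch (class and method names are assumptions based on the members documented above; verify against your version of the library):
// Histogram of gray-level differences between
// horizontally adjacent pixels (0 degrees):
var gldm = new GrayLevelDifferenceMethod(CooccurrenceDegree.Degree0);
int[] histogram = gldm.Compute(UnmanagedImage.FromManagedImage(image));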
Gray-Level Run-Length Matrix.
Gets or sets whether the maximum value of gray should be
automatically computed from the image. If set to false,
the maximum gray value will be assumed 255.
Gets or sets the direction at which the co-occurrence should be found.
Gets the number of primitives found in the last
call to .
Initializes a new instance of the class.
The direction at which the co-occurrence should be found.
Initializes a new instance of the class.
The direction at which the co-occurrence should be found.
Whether the maximum value of gray should be
automatically computed from the image. Default is true.
Computes the Gray-level Run-length for the given source image.
The source image.
An array of run-length vectors containing level counts
for every width pixel in .
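A minimal usage sketch (class and method names, and the return type, are assumptions based on the members documented above; verify against your version of the library):
// Run lengths computed along the horizontal (0 degrees) direction:
var glrlm = new GrayLevelRunLengthMatrix(CooccurrenceDegree.Degree0);
double[][] runLengths = glrlm.Compute(UnmanagedImage.FromManagedImage(image));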
Common interface for feature points.
Gets or sets the x-coordinate of this point.
Gets or sets the y-coordinate of this point.
Common interface for feature points.
Haralick's operation modes.
Features will be combined using
.
Features will be combined using
.
Features will be combined using
.
Features will be combined using
.
Haralick textural feature extractor.
Haralick's texture features are based on measures derived from
Gray-level Co-occurrence
matrices (GLCM).
Whether considering the intensity or grayscale values of the image
or various dimensions of color, the co-occurrence matrix can measure
the texture of the image. Because co-occurrence matrices are typically
large and sparse, various metrics of the matrix are often taken to get
a more useful set of features. Features generated using this technique
are usually called Haralick features, after R. M. Haralick and
his paper Textural Features for Image Classification (1973).
This class can extract Haralick descriptors from different
regions of an image using a pre-defined cell size. For more information
about which features are computed, please see the documentation for the
HaralickDescriptor class.
References:
-
Wikipedia Contributors, "Co-occurrence matrix". Available at
http://en.wikipedia.org/wiki/Co-occurrence_matrix
-
Robert M Haralick, K Shanmugam, Its'hak Dinstein; "Textural
Features for Image Classification". IEEE Transactions on Systems, Man,
and Cybernetics. SMC-3 (6): 610–621, 1973. Available at:
http://www.makseq.com/materials/lib/Articles-Books/Filters/Texture/Co-occurrence/haralick73.pdf
The first example shows how to extract Haralick descriptors given an image.
Input image:
The second example shows how to use the Haralick feature extractor as part of a
Bag-of-Words model in order to perform texture image classification:
Gets the size of a cell, in pixels. A value of 0 means the
cell will have the size of the image. Default is 0 (uses the
entire image).
Gets the s which should
be computed by this Haralick textural feature extractor.
Default is .
Gets or sets the mode of operation of this
Haralick's textural
feature extractor.
The mode determines how the different features captured
by the are combined.
A value from the enumeration
specifying how the different features should be combined.
Gets or sets the number of features to extract using
the . By default, only
the first 13 original Haralick's features will be used.
Gets the set of local binary patterns computed for each
cell in the last call to .
Gets the Gray-level
Co-occurrence Matrix (GLCM) generated during the last
call to .
Gets or sets whether to normalize final
histogram feature vectors. Default is false.
Initializes a new instance of the class.
The angulation degrees on which the Haralick's
features should be computed. Default is to use all directions.
Initializes a new instance of the class.
The size of a computing cell, measured in pixels.
Default is 0 (use whole image at once).
Whether to normalize generated
histograms. Default is false.
Initializes a new instance of the class.
The size of a computing cell, measured in pixels.
Default is 0 (use whole image at once).
Whether to normalize generated
histograms. Default is true.
The angulation degrees on which the Haralick's
features should be computed. Default is to use all directions.
This method should be implemented by inheriting classes to implement the
actual feature extraction, transforming the input image into a list of features.
Creates a new object that is a copy of the current instance.
Haralick's Texture Features.
Haralick's texture features are based on measures derived from
Gray-level Co-occurrence
matrices (GLCM).
Whether considering the intensity or grayscale values of the image
or various dimensions of color, the co-occurrence matrix can measure
the texture of the image. Because co-occurrence matrices are typically
large and sparse, various metrics of the matrix are often taken to get
a more useful set of features. Features generated using this technique
are usually called Haralick features, after R. M. Haralick and
his paper Textural Features for Image Classification (1973).
This class encompasses most of the features derived in Haralick's original
paper. All features are lazily evaluated when needed, but they may also be
combined into a single feature vector using the feature-vector creation method described below.
References:
-
Wikipedia Contributors, "Co-occurrence matrix". Available at
http://en.wikipedia.org/wiki/Co-occurrence_matrix
-
Robert M Haralick, K Shanmugam, Its'hak Dinstein; "Textural
Features for Image Classification". IEEE Transactions on Systems, Man,
and Cybernetics. SMC-3 (6): 610–621, 1973. Available at:
http://www.makseq.com/materials/lib/Articles-Books/Filters/Texture/Co-occurrence/haralick73.pdf
For a complete example on how to use this class, please refer to
the documentation of the main Haralick class.
Initializes a new instance of the class.
The co-occurrence matrix to compute features from.
Gets the number of gray levels in the
original image. This is the number of
dimensions of the co-occurrence matrix.
Gets the matrix sum.
Gets the matrix mean μ.
Gets the marginal probability vector
obtained by summing the rows of p(i,j),
given as px(i) = Σj p(i,j).
Gets the marginal probability vector
obtained by summing the columns of p(i,j),
given as py(j) = Σi p(i,j).
Gets μx, the mean value of the
px vector.
Gets μy, the mean value of the
py vector.
Gets σx, the variance of the
px vector.
Gets σy, the variance of the
py vector.
Gets Hx, the entropy of the
px vector.
Gets Hy, the entropy of the
py vector.
Gets p(x+y)(k), the sum
of elements whose indices sum to k.
Gets p(x-y)(k), the sum of elements
whose absolute index difference equals k.
Gets Haralick's first textural feature,
the Angular Second Momentum.
Gets Haralick's second textural feature,
the Contrast.
Gets Haralick's third textural feature,
the Correlation.
Gets Haralick's fourth textural feature,
the Sum of Squares: Variance.
Gets Haralick's fifth textural feature,
the Inverse Difference Moment.
Gets Haralick's sixth textural feature,
the Sum Average.
Gets Haralick's seventh textural feature,
the Sum Variance.
Gets Haralick's eighth textural feature,
the Sum Entropy.
Gets Haralick's ninth textural feature,
the Entropy.
Gets Haralick's tenth textural feature,
the Difference Variance.
Gets Haralick's eleventh textural feature,
the Difference Entropy.
Gets Haralick's twelfth textural feature,
the First Information Measure.
Gets Haralick's thirteenth textural feature,
the Second Information Measure.
Gets Haralick's fourteenth textural feature,
the Maximal Correlation Coefficient.
Gets Haralick's first textural feature, the
Angular Second Momentum, also known as Energy
or Homogeneity.
Gets a variation of Haralick's second textural feature,
the Contrast with Absolute values (instead of squares).
Gets Haralick's second textural feature,
the Contrast.
Gets Haralick's third textural feature,
the Correlation.
Gets Haralick's fourth textural feature,
the Sum of Squares: Variance.
Gets Haralick's fifth textural feature, the Inverse
Difference Moment, also known as Local Homogeneity.
Can be regarded as a complement to .
Gets a variation of Haralick's fifth textural feature,
the Texture Homogeneity. Can be regarded as a complement
to .
Gets Haralick's sixth textural feature,
the Sum Average.
Gets Haralick's seventh textural feature,
the Sum Variance.
Gets Haralick's eighth textural feature,
the Sum Entropy.
Gets Haralick's ninth textural feature,
the Entropy.
Gets Haralick's tenth textural feature,
the Difference Variance.
Gets Haralick's eleventh textural feature,
the Difference Entropy.
Gets Haralick's twelfth textural feature,
the First Information Measure.
Gets Haralick's thirteenth textural feature,
the Second Information Measure.
Gets Haralick's fourteenth textural feature,
the Maximal Correlation Coefficient.
Gets the Cluster Shade textural feature.
Gets the Cluster Prominence textural feature.
Creates a feature vector with
the chosen feature functions.
How many features to include in the vector. Default is 13.
A vector with Haralick's features up
to the given number passed as input.
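Putting this class together with the co-occurrence matrix documented earlier, a Haralick feature vector could be obtained roughly as follows (the GetVector name is an assumption based on the feature-vector description above; verify against your version of the library):
// Compute a GLCM for the image...
var glcm = new GrayLevelCooccurrenceMatrix(distance: 1,
    degree: CooccurrenceDegree.Degree0);
double[,] matrix = glcm.Compute(UnmanagedImage.FromManagedImage(image));
// ...then derive the first 13 of Haralick's features from it:
var descriptor = new HaralickDescriptor(matrix);
double[] features = descriptor.GetVector(13);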
Local Binary Patterns.
Local binary patterns (LBP) are a type of feature used for classification
in computer vision. LBP is a particular case of the Texture Spectrum
model proposed in 1990 and was first described in 1994. It has since
been found to be a powerful feature for texture classification; it has
further been determined that when LBP is combined with the Histogram of
Oriented Gradients (HOG) descriptor, it improves the detection performance
considerably on some datasets.
References:
-
Wikipedia Contributors, "Local Binary Patterns". Available at
http://en.wikipedia.org/wiki/Local_binary_patterns
The first example shows how to extract LBP descriptors given an image.
Input image:
The second example shows how to use the LBP feature extractor as part of a
Bag-of-Words model in order to perform texture image classification:
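Since the original example listings are not reproduced here, a minimal usage sketch follows (the constructor parameter names and the ProcessImage return type are assumptions based on the members documented below):
// Obtain an image
// Bitmap image = ...
// Create the extractor with the default block and cell sizes:
var lbp = new LocalBinaryPattern(blockSize: 3, cellSize: 6);
// Extract one normalized histogram feature vector per block:
List<double[]> descriptors = lbp.ProcessImage(image);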
Gets the size of a cell, in pixels. Default is 6.
Gets the size of a block, in pixels. Default is 3.
Gets the set of local binary patterns computed for each
pixel in the last call to .
Gets the histogram computed at each cell.
Gets or sets whether to normalize final
histogram feature vectors. Default is true.
Initializes a new instance of the class.
The size of a block, measured in cells. Default is 3.
The size of a cell, measured in pixels. If set to zero, the entire
image will be used at once, forming a single block. Default is 6.
Whether to normalize generated histograms. Default is true.
This method should be implemented by inheriting classes to implement the
actual feature extraction, transforming the input image into a list of features.
Creates a new object that is a copy of the current instance.
Fast Retina Keypoint (FREAK) feature descriptor types.
Do not compute descriptors.
Compute standard 512-bit descriptors.
Compute extended 1024-bit descriptors.
Fast Retina Keypoint (FREAK) detector.
The FREAK algorithm is a binary-based interest point descriptor algorithm
that relies on a separate corner detector to locate the points it describes.
In the following example, we will see how we can extract binary descriptor
vectors from a given image using the Fast Retina Keypoint detector together
with a FAST corners detection algorithm.
Bitmap lena = Resources.lena512;
// The freak detector can be used with any other corners detection
// algorithm. The default corners detection method used is the FAST
// corners detection. So, let's start creating this detector first:
//
var detector = new FastCornersDetector(60);
// Now that we have a corners detector, we can pass it to the FREAK
// feature extraction algorithm. Please note that if we leave this
// parameter empty, FAST will be used by default.
//
var freak = new FastRetinaKeypointDetector(detector);
// Now, all we have to do is to process our image:
List<FastRetinaKeypoint> points = freak.ProcessImage(lena);
// Afterwards, we should obtain 83 feature points. We can inspect
// the feature points visually using the FeaturesMarker class, as in:
//
FeaturesMarker marker = new FeaturesMarker(points, scale: 20);
// And showing it on screen with
ImageBox.Show(marker.Apply(lena));
// We can also inspect the feature vectors (descriptors) associated
// with each feature point. In order to get a descriptor vector for
// any given point, we can use
//
byte[] feature = points[42].Descriptor;
// By default, feature vectors will have 64 bytes in length. We can also
// display those vectors in more readable formats such as HEX or base64
//
string hex = points[42].ToHex();
string b64 = points[42].ToBase64();
// The above base64 result should be:
//
// "3W8M/ev///ffbr/+v3f34vz//7X+f0609v//+++/1+jfq/e83/X5/+6ft3//b4uaPZf7ePb3n/P93/rIbZlf+g=="
//
The resulting image is shown below:
Gets the corners detector used to generate features.
Gets or sets a value indicating whether all feature points
should have their descriptors computed after being detected.
Default is to compute standard descriptors.
true to compute descriptors; otherwise, false.
Gets or sets the number of octaves to use when
building the feature descriptor. Default is 4.
Gets or sets the scale used when building
the feature descriptor. Default is 22.
Initializes a new instance of the class.
The detection threshold for the
FAST detector.
Initializes a new instance of the class.
Initializes a new instance of the class.
A corners detector.
This method should be implemented by inheriting classes to implement the
actual feature extraction, transforming the input image into a list of features.
Gets the
feature descriptor for the last processed image.
Creates a new object that is a copy of the current instance.
Fast Retina Keypoint (FREAK) point.
In order to extract feature points from an image using FREAK,
please take a look at the
documentation page.
Initializes a new instance of the class.
The x-coordinate of the point in the image.
The y-coordinate of the point in the image.
Gets or sets the x-coordinate of this point.
Gets or sets the y-coordinate of this point.
Gets or sets the scale of the point.
Gets or sets the orientation of this point in angles.
Gets or sets the descriptor vector
associated with this point.
Converts the binary descriptor to
string of hexadecimal values.
A string containing a hexadecimal
value representing this point's descriptor.
Converts the binary descriptor
to a string of binary values.
A string containing a binary value
representing this point's descriptor.
Converts the binary descriptor to base64.
A string containing the base64
representation of the descriptor.
Converts the feature point to a .
Converts this object into a .
The result of the conversion.
Converts this object into a .
The result of the conversion.
Performs an implicit conversion from
to .
The point to be converted.
The result of the conversion.
Performs an implicit conversion from
to .
The point to be converted.
The result of the conversion.
Performs an implicit conversion from
to .
The point to be converted.
The result of the conversion.
Fast Retina Keypoint (FREAK) descriptor.
Based on original implementation by A. Alahi, R. Ortiz, and P.
Vandergheynst, distributed under a BSD style license.
In order to extract feature points from an image using FREAK,
please take a look at the
documentation page.
References:
-
A. Alahi, R. Ortiz, and P. Vandergheynst. FREAK: Fast Retina Keypoint. In IEEE Conference on
Computer Vision and Pattern Recognition, CVPR 2012 Open Source Award Winner.
Gets or sets whether the orientation is normalized.
Gets or sets whether the scale is normalized.
Gets or sets whether to compute the standard 512-bit
descriptors or the extended 1024-bit descriptors.
Gets the of
the original source's feature detector.
The integral image from where the
features have been detected.
Gets the of
the original source's feature detector.
The integral image from where the
features have been detected.
Initializes a new instance of the class.
Describes the specified point (i.e. computes and
sets the orientation and descriptor vector fields
of the given point).
The point to be described.
Creates a new object that is a copy of the current instance.
A new object that is a copy of this instance.
Pattern scale resolution.
Pattern orientation resolution.
Number of pattern points.
Smallest keypoint size.
Look-up table for the pattern points (position and
sigma of all points at all scales and orientations).
Creates a new object that is a copy of the current instance.
A new object that is a copy of this instance.
Histograms of Oriented Gradients (HOG) descriptor extractor.
References:
-
Navneet Dalal and Bill Triggs, "Histograms of Oriented Gradients for Human Detection",
CVPR 2005. Available at:
http://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf
The first example shows how to extract HOG descriptors from a standard test image:
The second example shows how to use HOG descriptors as part of a BagOfVisualWords (BoW) pipeline
for image classification:
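Since the original example listings are not reproduced here, a minimal usage sketch follows (the constructor parameter names and the ProcessImage return type are assumptions based on the members documented below):
// Obtain an image
// Bitmap image = ...
// Create the extractor with 9 histogram bins,
// 3-cell blocks and 6-pixel cells:
var hog = new HistogramsOfOrientedGradients(numberOfBins: 9,
    blockSize: 3, cellSize: 6);
// Extract one histogram feature vector per block:
List<double[]> descriptors = hog.ProcessImage(image);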
Gets the size of a cell, in pixels. Default is 6.
Gets the size of a block, in pixels. Default is 3.
Gets the number of histogram bins. Default is 9.
Gets the width of the histogram bin. This property is
computed as (2.0 * System.Math.PI) / numberOfBins.
Gets the matrix of orientations generated in
the last call to .
Gets the matrix of magnitudes generated in
the last call to .
Gets the histogram computed at each cell.
Gets or sets whether to normalize final
histogram feature vectors. Default is true.
Initializes a new instance of the class.
Initializes a new instance of the class.
The number of histogram bins.
The size of a block, measured in cells.
The size of a cell, measured in pixels.
This method should be implemented by inheriting classes to implement the
actual feature extraction, transforming the input image into a list of features.
Creates a new object that is a copy of the current instance.
Response filter.
In SURF, the scale-space is divided into a number of octaves,
where an octave refers to a series of
response maps covering a doubling of scale.
In the traditional approach to constructing a scale-space,
the image size is varied and the Gaussian filter is repeatedly
applied to smooth subsequent layers. The SURF approach leaves
the original image unchanged and varies only the filter size.
Creates the initial map of responses according to
the specified number of octaves and initial step.
Updates the response filter definitions
without recreating objects.
Computes the filter using the specified
Integral Image.
The integral image.
Returns an enumerator that iterates through the collection.
A that can be used to iterate through the collection.
Returns an enumerator that iterates through this collection.
An object that can be used to iterate through the collection.
Response Layer.
Gets the width of the filter.
Gets the height of the filter.
Gets the filter step.
Gets the filter size.
Gets the responses computed from the filter.
Gets the Laplacian computed from the filter.
Initializes a new instance of the class.
Updates the response layer definitions
without recreating objects.
Computes the filter for the specified integral image.
The integral image.
Corner feature point.
Initializes a new instance of the class.
Gets the X position of the point.
Gets the Y position of the point.
Gets the descriptor vector
associated with this point.
Feature detector based on corners.
This class can be used as an adapter for classes implementing
AForge.NET's ICornersDetector interface, so they can be used
where an is needed.
For an example on how to use this class, please take a look
at the example section for BagOfVisualWords{T}.
Gets the corners detector used to generate features.
Initializes a new instance of the class.
A corners detector.
This method should be implemented by inheriting classes to implement the
actual corners detection, transforming the input image into a list of points.
Creates a new object that is a copy of the current instance.
Releases unmanaged and - optionally - managed resources.
true to release both managed and unmanaged
resources; false to release only unmanaged resources.
Common interface for feature detectors (e.g. ,
, ).
The type of the extracted features.
Obsolete. See instead.
Obsolete. Please use the method instead.
Obsolete. Please use the method instead.
Obsolete. Please use the method instead.
Hu's set of invariant image moments.
In image processing, computer vision and related fields, an image moment is
a certain particular weighted average (moment) of the image pixels' intensities,
or a function of such moments, usually chosen to have some attractive property
or interpretation.
Image moments are useful to describe objects after segmentation. Simple properties
of the image which are found via image moments include area (or total intensity),
its centroid, and information about its orientation.
Hu's set of invariant moments are invariant under translation, changes in scale,
and also rotation. The first moment, I1, is analogous to the moment
of inertia around the image's centroid, where the pixels' intensities are analogous
to physical density. The last one, I7, is skew invariant, which enables it to distinguish
mirror images of otherwise identical images.
References:
-
Wikipedia contributors. "Image moment." Wikipedia, The Free Encyclopedia. Wikipedia,
The Free Encyclopedia. Available at http://en.wikipedia.org/wiki/Image_moment
Bitmap image = ...;
// Compute the Hu moments of up to third order
HuMoments hu = new HuMoments(image, order: 3);
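For reference, the first two invariants can be written in terms of the normalized central moments η(p,q) as I1 = η(2,0) + η(0,2) and I2 = (η(2,0) - η(0,2))² + 4·η(1,1)².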
Hu moment of order 1.
Hu moment of order 2.
Hu moment of order 3.
Hu moment of order 4.
Hu moment of order 5.
Hu moment of order 6.
Hu moment of order 7.
Initializes a new instance of the class.
The maximum moment order to be computed.
Initializes a new instance of the class.
The maximum moment order to be computed.
The image whose moments should be computed.
Initializes a new instance of the class.
The maximum order for the moments.
The image whose moments should be computed.
Computes the Hu moments from the specified central moments.
The central moments to use as base of calculations.
Computes the center moments for the specified image.
The image whose moments should be computed.
The region of interest in the image to compute moments for.
Computes the center moments for the specified image.
The image whose moments should be computed.
The region of interest in the image to compute moments for.
Base class for image moments.
Gets or sets the maximum order of the moments.
Initializes a new instance of the class.
The maximum order for the moments.
Initializes a new instance of the class.
The maximum order for the moments.
The image whose moments should be computed.
The region of interest in the image to compute moments for.
Initializes a new instance of the class.
The maximum order for the moments.
The image whose moments should be computed.
The region of interest in the image to compute moments for.
Initializes a new instance of the class.
The maximum order for the moments.
The image whose moments should be computed.
The region of interest in the image to compute moments for.
Initializes a new instance of the class.
The maximum order for the moments.
The image whose moments should be computed.
Initializes a new instance of the class.
The maximum order for the moments.
The image whose moments should be computed.
Initializes a new instance of the class.
The maximum order for the moments.
The image whose moments should be computed.
Computes the moments for the specified image.
The image whose moments should be computed.
Computes the moments for the specified image.
The image whose moments should be computed.
Computes the moments for the specified image.
The image whose moments should be computed.
Computes the moments for the specified image.
The image whose moments should be computed.
The region of interest in the image to compute moments for.
Computes the moments for the specified image.
The image whose moments should be computed.
The region of interest in the image to compute moments for.
Computes the moments for the specified image.
The image whose moments should be computed.
The region of interest in the image to compute moments for.
Computes the moments for the specified image.
The image whose moments should be computed.
The region of interest in the image to compute moments for.
Central image moments.
In image processing, computer vision and related fields, an image moment is
a certain particular weighted average (moment) of the image pixels' intensities,
or a function of such moments, usually chosen to have some attractive property
or interpretation.
Image moments are useful to describe objects after segmentation. Simple properties
of the image which are found via image moments include area (or total intensity),
its centroid, and information about its orientation.
The central moments can be used to find the location, center of mass and the
dimensions of a given object within an image.
References:
-
Wikipedia contributors. "Image moment." Wikipedia, The Free Encyclopedia. Wikipedia,
The Free Encyclopedia. Available at http://en.wikipedia.org/wiki/Image_moment
Bitmap image = ...;
// Compute the center moments of up to third order
CentralMoments cm = new CentralMoments(image, order: 3);
// Get size and orientation of the image
SizeF size = cm.GetSize();
float angle = cm.GetOrientation();
Gets the default maximum moment order.
Central moment of order (0,0).
Central moment of order (1,0).
Central moment of order (0,1).
Central moment of order (1,1).
Central moment of order (2,0).
Central moment of order (0,2).
Central moment of order (2,1).
Central moment of order (1,2).
Central moment of order (3,0).
Central moment of order (0,3).
Initializes a new instance of the class.
Initializes a new instance of the class.
The raw moments to construct central moments.
Initializes a new instance of the class.
The maximum order for the moments.
The image whose moments should be computed.
Initializes a new instance of the class.
The maximum order for the moments.
The image whose moments should be computed.
Initializes a new instance of the class.
The maximum order for the moments.
The image whose moments should be computed.
The region of interest in the image to compute moments for.
Initializes a new instance of the class.
The maximum order for the moments.
The image whose moments should be computed.
The region of interest in the image to compute moments for.
Initializes a new instance of the class.
The maximum order for the moments.
The image whose moments should be computed.
The region of interest in the image to compute moments for.
Computes the center moments from the specified raw moments.
The raw moments to use as base of calculations.
Computes the center moments for the specified image.
The image.
The region of interest in the image to compute moments for.
Computes the center moments for the specified image.
The image whose moments should be computed.
The region of interest in the image to compute moments for.
Gets the size of the ellipse containing the image.
The size of the ellipse containing the image.
Gets the orientation of the ellipse containing the image.
The angle of orientation of the ellipse, in radians.
Gets both size and orientation of the ellipse containing the image.
The angle of orientation of the ellipse, in radians.
The size of the ellipse containing the image.
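For reference, the orientation of the equivalent ellipse is classically derived from the second-order central moments; a one-line sketch of that formula (not necessarily this class's exact implementation):
// Given the central moments mu11, mu20 and mu02 of the object,
// the orientation angle of the equivalent ellipse, in radians:
double angle = 0.5 * Math.Atan2(2 * mu11, mu20 - mu02);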
Common interface for image moments.
Computes the center moments for the specified image.
The image whose moments should be computed.
The region of interest in the image to compute moments for.
Computes the center moments for the specified image.
The image whose moments should be computed.
The region of interest in the image to compute moments for.
Raw image moments.
In image processing, computer vision and related fields, an image moment is
a certain particular weighted average (moment) of the image pixels' intensities,
or a function of such moments, usually chosen to have some attractive property
or interpretation.
Image moments are useful to describe objects after segmentation. Simple properties
of the image which are found via image moments include area (or total intensity),
its centroid, and information about its orientation.
The raw moments are the most basic moments which can be computed from an image,
and can then be further processed to obtain central moments or even
Hu's invariant moments.
References:
-
Wikipedia contributors. "Image moment." Wikipedia, The Free Encyclopedia. Wikipedia,
The Free Encyclopedia. Available at http://en.wikipedia.org/wiki/Image_moment
Bitmap image = ...;
// Compute the raw moments of up to third order
RawMoments m = new RawMoments(image, order: 3);
Gets the default maximum moment order.
Raw moment of order (0,0).
Raw moment of order (1,0).
Raw moment of order (0,1).
Raw moment of order (1,1).
Raw moment of order (2,0).
Raw moment of order (0,2).
Raw moment of order (2,1).
Raw moment of order (1,2).
Raw moment of order (3,0).
Raw moment of order (0,3).
Inverse raw moment of order (0,0).
Gets the X centroid of the image.
Gets the Y centroid of the image.
Gets the area (for binary images) or sum of
gray level (for grayscale images).
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Computes the raw moments for the specified image.
The image whose moments should be computed.
The region of interest in the image to compute moments for.
True to compute second order moments, false otherwise.
Computes the raw moments for the specified image.
The image.
The region of interest in the image to compute moments for.
Computes the raw moments for the specified image.
The image.
The region of interest in the image to compute moments for.
Resets all moments to zero.
Maximum cross-correlation feature point matching algorithm.
This class matches feature points by using a maximum cross-correlation measure.
References:
-
P. D. Kovesi. MATLAB and Octave Functions for Computer Vision and Image Processing.
School of Computer Science and Software Engineering, The University of Western Australia.
Available in:
http://www.csse.uwa.edu.au/~pk/Research/MatlabFns/Match/matchbycorrelation.m
-
http://www.instructor.com.br/unesp2006/premiados/PauloHenrique.pdf
-
http://siddhantahuja.wordpress.com/2010/04/11/correlation-based-similarity-measures-summary/
Gets or sets the maximum distance to consider
points as correlated. Default is 0 (consider
all points).
Gets or sets the size of the correlation window.
Constructs a new Correlation Matching algorithm.
Constructs a new Correlation Matching algorithm.
Constructs a new Correlation Matching algorithm.
Constructs a new Correlation Matching algorithm.
Matches two sets of feature points computed from the given images.
Matches two sets of feature points computed from the given images.
Matches two sets of feature points computed from the given images.
Matches two sets of feature points computed from the given images.
Constructs the correlation matrix between selected points from two images.
Rows correspond to points from the first image, columns correspond to points
in the second.
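As a from-scratch illustration of the measure itself (not the library's implementation), the normalized cross-correlation of two equally-sized pixel windows can be computed as follows:
// Normalized cross-correlation of two flattened windows:
static double NormalizedCrossCorrelation(double[] a, double[] b)
{
    double meanA = 0, meanB = 0;
    for (int i = 0; i < a.Length; i++)
    {
        meanA += a[i];
        meanB += b[i];
    }
    meanA /= a.Length;
    meanB /= b.Length;
    double num = 0, denA = 0, denB = 0;
    for (int i = 0; i < a.Length; i++)
    {
        double da = a[i] - meanA, db = b[i] - meanB;
        num += da * db;   // covariance term
        denA += da * da;  // variation of a
        denB += db * db;  // variation of b
    }
    return num / Math.Sqrt(denA * denB);
}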
Features from Accelerated Segment Test (FAST) corners detector.
In the FAST corner detection algorithm, a pixel is defined as a corner
if, in a circle surrounding the pixel, N or more contiguous pixels are
all significantly brighter than or all significantly darker than the center
pixel. The ordering of questions used to classify a pixel is learned using
the ID3 algorithm.
This detector has been shown to exhibit a high degree of repeatability.
The code is roughly based on the 9-point FAST corner detection
algorithm implementation in C by Edward Rosten, which has been
published under a 3-clause BSD license and is freely available at:
http://svr-www.eng.cam.ac.uk/~er258/work/fast.html.
References:
-
E. Rosten, T. Drummond. Fusing Points and Lines for High
Performance Tracking, ICCV 2005.
-
E. Rosten, T. Drummond. Machine learning for high-speed
corner detection, ECCV 2006.
Bitmap image = ... // Lena's famous picture
// Create a new FAST Corners Detector
FastCornersDetector fast = new FastCornersDetector()
{
Suppress = true, // suppress non-maximum points
Threshold = 40 // less leads to more corners
};
// Process the image looking for corners
List<IntPoint> points = fast.ProcessImage(image);
// Create a filter to mark the corners
PointsMarker marker = new PointsMarker(points);
// Apply the corner-marking filter
Bitmap markers = marker.Apply(image);
// Show on the screen
ImageBox.Show(markers);
The resulting image is shown below:
The second example shows how to extract FAST descriptors from a standard test image:
Initializes a new instance of the class.
The suppression threshold. Decreasing this value
increases the number of points detected by the algorithm. Default is 20.
Gets or sets a value indicating whether non-maximum
points should be suppressed. Default is true.
true if non-maximum points should
be suppressed; otherwise, false.
Gets or sets the corner detection threshold. Increasing this value results in less corners,
whereas decreasing this value will result in more corners detected by the algorithm.
The corners threshold.
Gets the scores of each corner detected in
the previous call to .
The scores of each last computed corner.
This method should be implemented by inheriting classes to implement the
actual corners detection, transforming the input image into a list of points.
Creates a new object that is a copy of the current instance.
SURF Feature descriptor types.
Do not compute descriptors.
Compute standard descriptors.
Compute extended descriptors.
Speeded-up Robust Features (SURF) detector.
Based on original implementation in the OpenSURF computer vision library
by Christopher Evans (http://www.chrisevansdev.com). Used under the LGPL
with permission of the original author.
Be aware that the SURF algorithm is patented. If you plan to use it
in a commercial application, you may have to acquire a license from
the patent holder.
References:
-
C. Evans. Notes on the OpenSURF Library. Available in:
http://sites.google.com/site/chrisevansdev/files/opensurf.pdf
-
P. D. Kovesi. MATLAB and Octave Functions for Computer Vision and Image Processing.
School of Computer Science and Software Engineering, The University of Western Australia.
Available in: http://www.csse.uwa.edu.au/~pk/Research/MatlabFns/Spatial/harris.m
The first example shows how to extract SURF descriptors from a standard test image:
The second example shows how to use SURF descriptors as part of a BagOfVisualWords (BoW) pipeline
for image classification:
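Since the original example listings are not reproduced here, a minimal usage sketch follows (the constructor parameter name and the ProcessImage return type are assumptions based on the members documented below):
// Obtain an image
// Bitmap image = ...
// Create the detector with the default
// non-maximum suppression threshold:
var surf = new SpeededUpRobustFeaturesDetector(threshold: 0.0002f);
// Detect the points and compute their descriptors:
List<SpeededUpRobustFeaturePoint> points = surf.ProcessImage(image);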
Initializes a new instance of the class.
The non-maximum suppression threshold. Default is 0.0002f.
The number of octaves to use when building the
response filter. Each octave corresponds to a series of maps covering a
doubling of scale in the image. Default is 5.
The initial step to use when building the
response filter. Default is 2.
Gets or sets a value indicating whether all feature points
should have their orientation computed after being detected.
Default is true.
Computing orientation requires additional processing;
set this property to false to compute the orientation of only
selected points by using the
current feature descriptor for the last set of detected points.
true to compute orientation; otherwise, false.
Gets or sets a value indicating whether all feature points
should have their descriptors computed after being detected.
Default is to compute standard descriptors.
Computing descriptors requires additional processing;
set this property to false to compute the descriptors of only
selected points by using the
current feature descriptor for the last set of detected points.
true to compute descriptors; otherwise, false.
Gets or sets the non-maximum suppression
threshold. Default is 0.0002.
The non-maximum suppression threshold.
Gets or sets the number of octaves to use when building
the response filter.
Each octave corresponds to a series of maps covering a
doubling of scale in the image. Default is 5.
Gets or sets the initial step to use when building
the response filter.
Default is 2.
This method should be implemented by inheriting classes to implement the
actual feature extraction, transforming the input image into a list of features.
Gets the
feature descriptor for the last processed image.
Creates a new object that is a copy of the current instance.
Releases unmanaged and - optionally - managed resources.
true to release both managed and unmanaged
resources; false to release only unmanaged resources.
Corners measures to be used in .
Original Harris' measure. Requires the setting of
a parameter k (default is 0.04), which may be a
bit arbitrary and introduce more parameters to tune.
Noble's measure. Does not require a parameter
and may be more stable.
Harris Corners Detector.
This class implements the Harris corners detector.
Sample usage:
// create corners detector's instance
HarrisCornersDetector hcd = new HarrisCornersDetector( );
// process image searching for corners
Point[] corners = hcd.ProcessImage( image );
// process points
foreach ( Point corner in corners )
{
// ...
}
References:
-
P. D. Kovesi. MATLAB and Octave Functions for Computer Vision and Image Processing.
School of Computer Science and Software Engineering, The University of Western Australia.
Available in: http://www.csse.uwa.edu.au/~pk/Research/MatlabFns/Spatial/harris.m
-
C.G. Harris and M.J. Stephens. "A combined corner and edge detector",
Proceedings Fourth Alvey Vision Conference, Manchester.
pp 147-151, 1988.
-
Alison Noble, "Descriptions of Image Surfaces", PhD thesis, Department
of Engineering Science, Oxford University 1989, p45.
Gets or sets the measure to use when detecting corners.
Harris parameter k. Default value is 0.04.
Harris threshold. Default value is 20000.
Gaussian smoothing sigma. Default value is 1.2.
Non-maximum suppression window radius. Default value is 3.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
Initializes a new instance of the class.
This method should be implemented by inheriting classes to implement the
actual corners detection, transforming the input image into a list of points.
Convolution with decomposed 1D kernel.
Creates a new object that is a copy of the current instance.
Joint representation of both Integral Image and Squared Integral Image.
This class provides a unified representation for
integral images, squared integral images and tilted integral images under
the same class. This class can be used to provide more efficient transformations
whenever all those representations are required at the same time, such as when
using the Viola-Jones (Haar Cascade) object detector.
Using this representation, all structures can be created in a single pass
over the data. This is interesting for real time applications. This class
also accepts a channel parameter indicating that the integral image should be
computed from a specified color channel. This avoids costly conversions.
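As a from-scratch sketch of the underlying idea (not this class's actual code), a summed-area table can be built in a single pass and then queried in constant time using the same formula given for GetSum below:
// The table has one extra row and column of zeros so that the
// rectangle-sum formula below needs no boundary checks:
static long[,] BuildIntegral(byte[,] pixels)
{
    int h = pixels.GetLength(0), w = pixels.GetLength(1);
    var I = new long[h + 1, w + 1];
    for (int y = 1; y <= h; y++)
        for (int x = 1; x <= w; x++)
            I[y, x] = pixels[y - 1, x - 1]
                    + I[y - 1, x] + I[y, x - 1] - I[y - 1, x - 1];
    return I;
}
// Sum of the w-by-h rectangle whose top-left corner is (x, y):
static long RectangleSum(long[,] I, int x, int y, int w, int h)
{
    return I[y, x] + I[y + h, x + w] - I[y + h, x] - I[y, x + w];
}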
Gets the image's width.
Gets the image's height.
Gets the Integral Image for values' sum.
Gets the Integral Image for values' squared sum.
Gets the Integral Image for tilted values' sum.
Constructs a new Integral image of the given size.
Constructs a new Integral image from a Bitmap image.
The source image from where the integral image should be computed.
The representation of
the source image.
Constructs a new Integral image from a Bitmap image.
The source image from where the integral image should be computed.
The image channel to consider in the computations. Default is 0.
The representation of
the source image.
Constructs a new Integral image from a Bitmap image.
The source image from where the integral image should be computed.
True to compute the tilted version of the integral image,
false otherwise. Default is false.
The representation of
the source image.
Constructs a new Integral image from a Bitmap image.
The source image from where the integral image should be computed.
The image channel to consider in the computations. Default is 0.
True to compute the tilted version of the integral image,
false otherwise. Default is false.
The representation of
the source image.
Constructs a new Integral image from a BitmapData image.
The source image from where the integral image should be computed.
The representation of
the source image.
Constructs a new Integral image from a BitmapData image.
The source image from where the integral image should be computed.
The image channel to consider in the computations. Default is 0.
The representation of
the source image.
Constructs a new Integral image from a BitmapData image.
The source image from where the integral image should be computed.
The image channel to consider in the computations. Default is 0.
True to compute the tilted version of the integral image,
false otherwise. Default is false.
The representation of
the source image.
Constructs a new Integral image from a BitmapData image.
The source image from where the integral image should be computed.
True to compute the tilted version of the integral image,
false otherwise. Default is false.
The representation of
the source image.
Constructs a new Integral image from an unmanaged image.
The source image from where the integral image should be computed.
The image channel to consider in the computations. Default is 0.
The representation of
the source image.
Constructs a new Integral image from an unmanaged image.
The source image from where the integral image should be computed.
The representation of
the source image.
Constructs a new Integral image from an unmanaged image.
The source image from where the integral image should be computed.
True to compute the tilted version of the integral image,
false otherwise. Default is false.
The representation of
the source image.
Constructs a new Integral image from an unmanaged image.
The source image from where the integral image should be computed.
The image channel to consider in the computations. Default is 0.
True to compute the tilted version of the integral image,
false otherwise. Default is false.
The representation of
the source image.
Computes the integral image representation from the given image.
Gets the sum of the pixels in a rectangle of the Integral image.
The horizontal position of the rectangle x.
The vertical position of the rectangle y.
The rectangle's height h.
The rectangle's width w.
The sum of all pixels contained in the rectangle, computed
as I[y, x] + I[y + h, x + w] - I[y + h, x] - I[y, x + w].
Gets the sum of the squared pixels in a rectangle of the Integral image.
The horizontal position of the rectangle x.
The vertical position of the rectangle y.
The rectangle's height h.
The rectangle's width w.
The sum of all pixels contained in the rectangle, computed
as I²[y, x] + I²[y + h, x + w] - I²[y + h, x] - I²[y, x + w].
Gets the sum of the pixels in a tilted rectangle of the Integral image.
The horizontal position of the rectangle x.
The vertical position of the rectangle y.
The rectangle's height h.
The rectangle's width w.
The sum of all pixels contained in the rectangle, computed
as T[y + w, x + w + 1] + T[y + h, x - h + 1] - T[y, x + 1] - T[y + w + h, x + w - h + 1].
Performs application-defined tasks associated with freeing,
releasing, or resetting unmanaged resources.
Releases unmanaged resources and performs other cleanup operations
before the is reclaimed by garbage collection.
Releases unmanaged and - optionally - managed resources
true to release both managed
and unmanaged resources; false to release only unmanaged
resources.
Speeded-Up Robust Feature (SURF) Point.
Initializes a new instance of the class.
The x-coordinate of the point in the image.
The y-coordinate of the point in the image.
The point's scale.
The point's laplacian value.
Initializes a new instance of the class.
The x-coordinate of the point in the image.
The y-coordinate of the point in the image.
The point's scale.
The point's laplacian value.
The point's orientation angle.
The point's response value.
Initializes a new instance of the class.
The x-coordinate of the point in the image.
The y-coordinate of the point in the image.
The point's scale.
The point's Laplacian value.
The SURF point descriptor.
The point's orientation angle.
The point's response value.
Gets or sets the x-coordinate of this point.
Gets or sets the y-coordinate of this point.
Gets or sets the scale of the point.
Gets or sets the response of the detected feature (strength).
Gets or sets the orientation of this point
measured anti-clockwise from the x-axis.
Gets or sets the sign of laplacian for this point
(which may be useful for fast matching purposes).
Gets or sets the descriptor vector
associated with this point.
Converts the feature point to a .
Converts this object into a .
The result of the conversion.
Converts this object into a .
The result of the conversion.
Performs an implicit conversion from
to .
The point to be converted.
The result of the conversion.
Performs an implicit conversion from
to .
The point to be converted.
The result of the conversion.
Performs an implicit conversion from
to .
The point to be converted.
The result of the conversion.
Encapsulates a 3-by-3 general transformation matrix
that represents a (possibly) non-linear transform.
Linear transformations are not the only ones that can be represented by
matrices. Using homogeneous coordinates, both affine transformations and
perspective projections on R^n can be represented as linear transformations
on R^(n+1) (that is, (n+1)-dimensional real projective space).
The general transformation matrix has 8 degrees of freedom, as the last
element is just a scale parameter.
Creates a new projective matrix.
Creates a new projective matrix.
Creates a new projective matrix.
Creates a new projective matrix.
Creates a new projective matrix.
Creates a new projective matrix.
Gets the elements of this matrix.
Gets the offset x.
Gets the offset y.
Gets whether this matrix is invertible.
Gets whether this is an Affine transformation matrix.
Gets whether this is the identity transformation.
Resets this matrix to be the identity.
Returns the inverse matrix, if this matrix is invertible.
Gets the transpose of this transformation matrix.
The transposed version of this matrix, given by H'.
Transforms the given points using this transformation matrix.
Transforms the given points using this transformation matrix.
Multiplies this matrix, returning a new matrix as result.
Compares two objects for equality.
Returns the hash code for this instance.
Double[,] conversion.
Single[,] conversion.
Double[,] conversion.
Single[,] conversion.
Matrix multiplication.
Represents an ordered pair of real x- and y-coordinates and a scalar w that defines
a point in a two-dimensional plane using homogeneous coordinates.
In mathematics, homogeneous coordinates are a system of coordinates used in
projective geometry much as Cartesian coordinates are used in Euclidean geometry.
They have the advantage that the coordinates of a point, even those at infinity,
can be represented using finite coordinates. Often formulas involving homogeneous
coordinates are simpler and more symmetric than their Cartesian counterparts.
Homogeneous coordinates have a range of applications, including computer graphics,
where they allow affine transformations and, in general, projective transformations
to be easily represented by a matrix.
References:
-
http://alumnus.caltech.edu/~woody/docs/3dmatrix.html
-
http://simply3d.wordpress.com/2009/05/29/homogeneous-coordinates/
The first coordinate.
The second coordinate.
The inverse scaling factor for X and Y.
Creates a new point.
Creates a new point.
Creates a new point.
Creates a new point.
Transforms a point using a projection matrix.
Normalizes the point to have unit scale.
Gets whether this point is normalized (w = 1).
Gets whether this point is at infinity (w = 0).
Gets whether this point is at the origin.
Converts the point to an array representation.
Multiplication by scalar.
Multiplication by scalar.
Multiplies the point by a scalar.
Subtraction.
Subtracts the values of two points.
Addition.
Adds the values of two points.
Equality.
Inequality.
PointF Conversion.
Converts to an integer point by computing the ceiling of the point coordinates.
Converts to an integer point by rounding the point coordinates.
Converts to an integer point by truncating the point coordinates.
Compares two objects for equality.
Returns the hash code for this instance.
Returns the empty point.
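A minimal sketch of the conventions just listed (an illustration, not the
library's exact type): w = 1 marks a normalized point, w = 0 a point at
infinity, and normalization divides the coordinates by w.
struct HomogeneousPoint
{
    public double X, Y, W;
    public HomogeneousPoint(double x, double y, double w) { X = x; Y = y; W = w; }

    public bool IsNormalized => W == 1.0;   // already at unit scale
    public bool IsAtInfinity => W == 0.0;   // a direction, not a location

    // scale so that w becomes 1; undefined for points at infinity
    public HomogeneousPoint Normalize() => new HomogeneousPoint(X / W, Y / W, 1.0);
}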
Speeded-Up Robust Features (SURF) Descriptor.
Gets or sets a value indicating whether the features
described by this should
be invariant to rotation. Default is true.
true for rotation invariant features; false otherwise.
Gets or sets a value indicating whether the features
described by this should
be computed in extended form. Default is false.
true for extended features; false otherwise.
Gets the of
the original source's feature detector.
The integral image from where the
features have been detected.
Initializes a new instance of the class.
The integral image which is the source of the feature points.
Describes the specified point (i.e. computes and
sets the orientation and descriptor vector fields
of the point).
The point to be described.
Describes all specified points (i.e. computes and
sets the orientation and descriptor vector fields
of each point).
The list of points to be described.
Determines the dominant orientation for the feature point.
Determines the dominant orientation for the feature point.
Constructs the descriptor vector for this interest point.
Gets the value of the Gaussian with standard deviation sigma at the point (x, y).
Gets the value of the Gaussian with standard deviation sigma at the point (x, y).
Gaussian look-up table for sigma = 2.5.
Creates a new object that is a copy of the current instance.
A new object that is a copy of this instance.
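Sample usage, assuming this library's SpeededUpRobustFeaturesDetector type
and the point fields documented above (a hedged sketch, not authoritative):
using System;
using System.Collections.Generic;
using System.Drawing;
using Accord.Imaging;

Bitmap image = new Bitmap("lena.bmp");

// detect SURF points; the detector computes position, scale, sign of
// the Laplacian, and (optionally) orientation and descriptor
var surf = new SpeededUpRobustFeaturesDetector();
List<SpeededUpRobustFeaturePoint> points = surf.ProcessImage(image);

foreach (var p in points)
    Console.WriteLine($"({p.X}, {p.Y}) scale={p.Scale} sign={p.Laplacian}");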
Static tool functions for imaging.
References:
-
P. D. Kovesi. MATLAB and Octave Functions for Computer Vision and Image Processing.
School of Computer Science and Software Engineering, The University of Western Australia.
Available in:
http://www.csse.uwa.edu.au/~pk/Research/MatlabFns/Match/matchbycorrelation.m
Computes the center of a given rectangle.
Compares two rectangles for equality, considering an acceptance threshold.
Creates a homography matrix matching points
from a set of points to another.
Creates a homography matrix matching points
from a set of points to another.
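Sample usage (a hedged sketch; the Tools.Homography helper and its exact
signature are assumed from the summaries above). Four point correspondences
suffice in the minimal case, since each pair contributes two constraints
toward the matrix's 8 degrees of freedom.
using System.Drawing;
using Accord.Imaging;

PointF[] src = { new PointF(0, 0), new PointF(1, 0), new PointF(1, 1), new PointF(0, 1) };
PointF[] dst = { new PointF(0, 0), new PointF(2, 0), new PointF(2, 2), new PointF(0, 2) };

// estimate the projective transform mapping src onto dst
MatrixH h = Tools.Homography(src, dst);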
Creates the fundamental matrix between two
images from a set of points from each image.
Creates the fundamental matrix between two
images from a set of points from each image.
Creates the fundamental matrix between two
images from a set of points from each image.
Normalizes a set of homogeneous points so that the origin is located
at the centroid and the mean distance to the origin is sqrt(2).
Normalizes a set of homogeneous points so that the origin is located
at the centroid and the mean distance to the origin is sqrt(2).
Normalizes a set of homogeneous points so that the origin is located
at the centroid and the mean distance to the origin is sqrt(2).
Normalizes a set of homogeneous points so that the origin is located
at the centroid and the mean distance to the origin is sqrt(2).
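The normalization above is the standard preconditioning step used before
estimating homographies or fundamental matrices; a minimal sketch over plain
coordinate arrays:
using System;

// translate the centroid to the origin, then scale so the mean distance
// to the origin is sqrt(2)
static void NormalizePoints(double[] xs, double[] ys)
{
    int n = xs.Length;
    double cx = 0, cy = 0;
    for (int i = 0; i < n; i++) { cx += xs[i]; cy += ys[i]; }
    cx /= n; cy /= n;

    double meanDist = 0;
    for (int i = 0; i < n; i++)
        meanDist += Math.Sqrt((xs[i] - cx) * (xs[i] - cx) + (ys[i] - cy) * (ys[i] - cy));
    meanDist /= n;

    double s = Math.Sqrt(2) / meanDist;
    for (int i = 0; i < n; i++) { xs[i] = s * (xs[i] - cx); ys[i] = s * (ys[i] - cy); }
}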
Detects if three points are collinear.
Detects if three points are collinear.
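A standard way to implement the collinearity test (a sketch, with a small
tolerance against rounding error): three points are collinear exactly when
the signed area of the triangle they form vanishes.
using System;

// area2 is twice the signed triangle area (a 2x2 determinant)
static bool AreCollinear(double x1, double y1, double x2, double y2,
                         double x3, double y3, double tol = 1e-9)
{
    double area2 = (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1);
    return Math.Abs(area2) < tol;
}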
Copies the horizontal and vertical resolution specifications
from a source Bitmap image and stores them in a destination image.
Computes the sum of the pixels in a given image.
Computes the sum of the pixels in a given image.
Computes the sum of the pixels in a given image.
Computes the sum of the pixels in a given image.
Computes the sum of the pixels in a given image.
Computes the sum of the pixels in a given image.
Computes the arithmetic mean of the pixels in a given image.
Computes the arithmetic mean of the pixels in a given image.
Computes the arithmetic mean of the pixels in a given image.
Computes the arithmetic mean of the pixels in a given image.
Computes the arithmetic mean of the pixels in a given image.
Computes the arithmetic mean of the pixels in a given image.
Computes the standard deviation of image pixels.
Computes the standard deviation of image pixels.
Computes the standard deviation of image pixels.
Computes the standard deviation of image pixels.
Computes the standard deviation of image pixels.
Computes the standard deviation of image pixels.
Computes the maximum pixel value in the given image.
Computes the maximum pixel value in the given image.
Computes the maximum pixel value in the given image.
Computes the maximum pixel value in the given image.
Computes the maximum pixel value in the given image.
Computes the maximum pixel value in the given image.
Computes the maximum pixel value in the given image.
Computes the maximum pixel value in the given image.
Computes the minimum pixel value in the given image.
Computes the minimum pixel value in the given image.
Computes the minimum pixel value in the given image.
Computes the minimum pixel value in the given image.
Computes the minimum pixel value in the given image.
Computes the minimum pixel value in the given image.
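The statistics above have their usual definitions; a plain-array sketch for
an 8-bpp grayscale image already unpacked into a byte matrix (the library's
own overloads work directly on Bitmap and raw image data):
using System;

static (long Sum, double Mean, double StdDev, byte Min, byte Max) Stats(byte[,] px)
{
    long sum = 0;
    byte min = byte.MaxValue, max = byte.MinValue;
    int n = px.GetLength(0) * px.GetLength(1);
    foreach (byte v in px)
    {
        sum += v;
        if (v < min) min = v;
        if (v > max) max = v;
    }
    double mean = (double)sum / n, ss = 0;
    foreach (byte v in px)
        ss += (v - mean) * (v - mean);
    // sample standard deviation, dividing by n - 1
    return (sum, mean, Math.Sqrt(ss / (n - 1)), min, max);
}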
Converts an image given as a matrix of pixel values into a .
For more options, please use the class.
A matrix containing the grayscale pixel
values as bytes.
A of the same width
and height as the pixel matrix containing the given pixel values.
Converts an image given as a matrix of pixel values into a .
For more options, please use the class.
A matrix containing the grayscale pixel
values as bytes.
A of the same width
and height as the pixel matrix containing the given pixel values.
Converts an image given as a matrix of pixel values into a .
For more options, please use the class.
A matrix containing the grayscale pixel
values as bytes.
A of the same width
and height as the pixel matrix containing the given pixel values.
Converts an image given as a matrix of pixel values into a .
For more options, please use the class.
A matrix containing the grayscale pixel
values as bytes.
A of the same width
and height as the pixel matrix containing the given pixel values.
Converts an image given as a matrix of pixel values into a .
For more options, please use the class.
A matrix containing the grayscale pixel
values as bytes.
A of the same width
and height as the pixel matrix containing the given pixel values.
Converts an image given as a into a matrix of
pixel values. For more options, please use the class.
An image represented as a bitmap.
A matrix containing the values of each pixel in the bitmap.
Converts an image given as a into a matrix of
pixel values. For more options, please use the class.
An image represented as a bitmap.
A matrix containing the values of each pixel in the bitmap.
Converts an image given as a into a matrix of
pixel values. For more options, please use the class.
An image represented as a bitmap.
The color channel to be extracted.
A matrix containing the values of each pixel in the bitmap.
Converts an image given as a into a matrix of
pixel values. For more options, please use the class.
An image represented as a bitmap.
The color channel to be extracted.
A matrix containing the values of each pixel in the bitmap.
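Sample usage of the converter classes these summaries point to (a hedged
sketch; the MatrixToImage and ImageToMatrix names are assumed from this
library's Converters namespace):
using System.Drawing;
using Accord.Imaging.Converters;

byte[,] pixels = new byte[64, 64];      // grayscale values 0..255
for (int y = 0; y < 64; y++)
    for (int x = 0; x < 64; x++)
        pixels[y, x] = (byte)(4 * x);   // simple horizontal ramp

// matrix -> Bitmap
var toImage = new MatrixToImage();
toImage.Convert(pixels, out Bitmap bitmap);

// Bitmap -> matrix
var toMatrix = new ImageToMatrix();
toMatrix.Convert(bitmap, out byte[,] roundTrip);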
Multiplies a point by a transformation matrix.
Multiplies a transformation matrix and a point.
Computes the inner product of two points.
Transforms the given points using this transformation matrix.
Gets the image format most likely associated with a given file name.
The filename in the form "image.jpg".
The most likely associated with
the given .
Locks a Bitmap into system memory.
Locks a Bitmap into system memory and executes an operation with a
that points to this memory location.
Locks a Bitmap into system memory and executes an operation with a
that points to this memory location.
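The lock/unlock pattern these helpers wrap looks as follows with the
standard System.Drawing API; wrapping it in a helper keeps the UnlockBits
call from being forgotten when the operation throws.
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

Bitmap bitmap = new Bitmap(32, 32, PixelFormat.Format24bppRgb);
BitmapData data = bitmap.LockBits(
    new Rectangle(0, 0, bitmap.Width, bitmap.Height),
    ImageLockMode.ReadWrite, bitmap.PixelFormat);
try
{
    // data.Scan0 points to the first scan line; data.Stride is the
    // (possibly padded) byte width of each row
    byte first = Marshal.ReadByte(data.Scan0);
    Console.WriteLine($"first byte: {first}, stride: {data.Stride}");
}
finally
{
    bitmap.UnlockBits(data);
}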