Abstract
An important area of electronic image processing is the segmentation of an image into regions that separate objects from the background. Image segmentation methods fall into three categories. The first is image thresholding, which uses a predetermined graylevel as a decision criterion to separate an image into different regions based upon the graylevels of its pixels. The second uses the discontinuities between graylevel regions to detect edges and contours within an image; edges play a very important role in the extraction of features for object recognition and identification. The third separates an image into several regions based upon a desired criterion; for example, pixels that are connected and share the same graylevel are grouped together to form one region.
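The first two approaches above can be sketched in a few lines of NumPy. This is an illustrative toy, not a production segmenter: the threshold value (128) and the 5x5 test image are arbitrary choices, and the edge map uses simple first differences rather than a full gradient operator such as Sobel.

```python
import numpy as np

def threshold_segment(image, t):
    """Binary segmentation: pixels at or above graylevel t become
    object (1); all other pixels become background (0)."""
    return (image >= t).astype(np.uint8)

def gradient_edges(image):
    """Crude edge map from graylevel discontinuities: the magnitude of
    the first differences along each axis (a simplified gradient)."""
    img = image.astype(float)
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1]))
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    return gx + gy

# Toy 5x5 image: a bright 3x3 "object" on a dark background.
img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 200

mask = threshold_segment(img, 128)   # 9 object pixels
edges = gradient_edges(img)          # nonzero along the object boundary
```

A threshold of 128 splits this image cleanly because the object (graylevel 200) and background (graylevel 0) are well separated; in practice the threshold is often chosen from the image histogram.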
After an image has been segmented into different objects, it is often desirable to describe these objects with a small set of descriptors, reducing the complexity of the image recognition process. Since edges play an important role in the recognition of objects within an image, contour description methods have been developed that completely describe an object by its contour. The three most common are chain codes, a coding scheme for boundary directions; higher-order polynomials fitted as a smooth curve to an object's contour; and the Fourier transform, whose coefficients describe the coordinates of an object's contour. An object within an image can also be described by several region descriptors, such as its area, perimeter, curvature, height, and width, or by its surface texture. For example, images of a smooth circular object and a rough circular object can be separated and described solely by the texture difference between them. A set of parameters has been developed that quantifies the description of an object's surface texture.
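Two of the contour descriptors named above, chain codes and Fourier descriptors, can be sketched as follows. The 8-direction numbering (0 = right, counterclockwise) follows the common Freeman convention; the 2x2 square boundary is a made-up example, and the Fourier step simply takes the DFT of the boundary points treated as complex numbers x + jy.

```python
import numpy as np

# Freeman 8-direction offsets: code k maps to a (d_row, d_col) step.
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
        (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(boundary):
    """Encode an ordered, closed boundary (a list of (row, col) pixels,
    each adjacent to the next) as a sequence of 8-direction codes."""
    codes = []
    n = len(boundary)
    for i in range(n):
        y0, x0 = boundary[i]
        y1, x1 = boundary[(i + 1) % n]  # wrap around to close the contour
        codes.append(DIRS.index((y1 - y0, x1 - x0)))
    return codes

# Boundary of a 2x2 square of pixels, traced clockwise from (0, 0):
square = [(0, 0), (0, 1), (1, 1), (1, 0)]
codes = chain_code(square)  # -> [0, 6, 4, 2]

# Fourier descriptors: DFT of the boundary coordinates as x + jy.
z = np.array([x + 1j * y for y, x in square])
descriptors = np.fft.fft(z)  # one complex coefficient per boundary point
```

The chain code is compact and translation invariant, since it records only the step directions; the Fourier coefficients can be made invariant to translation, scale, and rotation by discarding the first coefficient and normalizing the magnitudes.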