A new type of intelligent real-time protection device is developed that is better adapted to physical-layer protection of mesh optical networks. By converting optical signals into electrical ones, the protection device samples and processes real-time online optical power data from the fibers, thereby determining the working state of the mesh optical network. By communicating with one another, the protection devices placed at each node of the mesh optical network compute the backup lightpath of each primary fiber in advance, and each device dynamically establishes and maintains a backup lightpath route table. Whenever a primary fiber fails, the protection device immediately switches to the backup lightpath by driving the optical switch matrix according to its backup lightpath route table. The intelligent real-time protection device for mesh optical networks is a truly unattended, intelligent, real-time online monitoring and protection device for the physical layer of the optical network, and it meets the requirements of mesh optical network development. Transparent, uninterrupted communication over the mesh optical network thus becomes achievable by applying the proposed intelligent real-time protection device.
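The failover logic described above can be sketched as follows. This is a minimal illustration, not the device's actual firmware: the class and method names, the loss-of-signal threshold, and the route-table format are all assumed for the example.

```python
# Illustrative sketch of the monitor-and-switch behavior: each device keeps a
# backup lightpath route table and, when sampled optical power on a primary
# fiber drops below a threshold, switches traffic onto the precomputed backup.
# All names and the -28 dBm threshold are hypothetical.

POWER_THRESHOLD_DBM = -28.0  # assumed loss-of-signal threshold


class ProtectionDevice:
    def __init__(self):
        # backup lightpath route table: primary fiber id -> ordered backup path
        self.route_table = {}
        # fibers currently switched onto their backup lightpath
        self.active_backup = {}

    def update_route_table(self, primary, backup_path):
        """Record a backup lightpath learned by communicating with peer devices."""
        self.route_table[primary] = backup_path

    def on_power_sample(self, primary, power_dbm):
        """Process one sampled optical power reading of a primary fiber."""
        if power_dbm < POWER_THRESHOLD_DBM and primary in self.route_table:
            self.switch_to_backup(primary)

    def switch_to_backup(self, primary):
        """In hardware this step would drive the optical switch matrix."""
        self.active_backup[primary] = self.route_table[primary]
        return self.active_backup[primary]


dev = ProtectionDevice()
dev.update_route_table("fiber-AB", ["A", "C", "B"])
dev.on_power_sample("fiber-AB", -10.0)   # healthy: no switch
dev.on_power_sample("fiber-AB", -35.0)   # failure: switch to backup
print(dev.active_backup)  # {'fiber-AB': ['A', 'C', 'B']}
```

The key design point is that the route table is populated in advance, so the switchover on failure is a local table lookup rather than a distributed computation.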
Extracting effective features for texture classification is a long-standing difficulty in texture analysis. We present a method for texture feature extraction by independent component analysis (ICA) of Gabor features, called ICAG. It has three distinguishing aspects. First, Gabor wavelet transformation produces distinct textural features characterized by spatial locality, scale, and orientation selectivity. Second, principal component analysis (PCA) and ICA then reduce the dimensionality and redundancy of these features, and the resulting independent components are taken as texture features for classification. Third, in the ICA procedure, two different frameworks are discussed. Framework I, called ICAG I, regards pixels as random variables and represents them as a column vector by reshaping all the transformed images row by row, while framework II, called ICAG II, treats statistical features, namely the mean and standard deviation, as the random variables; thus the statistical features of all the transformed images form a column vector. Finally, comparative experiments between ICAG and two traditional methods, ICA and Gabor wavelets, are performed on two composite images (five narrowband textures and five natural textures with broader bands). The results indicate that ICAG provides the best performance and that ICAG II is both more efficient and more widely applicable.
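The ICAG II pipeline (Gabor filter bank → mean/std statistics → PCA → ICA) can be sketched roughly as below. This is a simplified illustration under stated assumptions: synthetic random patches stand in for real textures, the filter-bank frequencies and orientations are illustrative rather than the paper's settings, and scikit-learn's PCA/FastICA are used as generic stand-ins for the PCA and ICA steps.

```python
# Sketch of framework II (ICAG II): per-patch mean and standard deviation of
# each Gabor response form a feature vector; PCA decorrelates and reduces the
# dimensionality, then FastICA extracts independent components as the final
# texture features. Parameters are illustrative, not those of the paper.
import numpy as np
from scipy.signal import convolve2d
from sklearn.decomposition import PCA, FastICA


def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real part of a Gabor filter at the given frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)


def icag2_features(patches,
                   freqs=(0.1, 0.2),
                   thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                   n_components=4):
    """Mean/std of each Gabor response per patch -> PCA -> ICA (framework II)."""
    bank = [gabor_kernel(f, t) for f in freqs for t in thetas]
    stats = []
    for patch in patches:
        row = []
        for kernel in bank:
            response = convolve2d(patch, kernel, mode="same")
            row += [response.mean(), response.std()]  # the two statistics
        stats.append(row)
    X = np.asarray(stats)
    X = PCA(n_components=n_components, whiten=True).fit_transform(X)
    return FastICA(n_components=n_components, random_state=0).fit_transform(X)


rng = np.random.default_rng(0)
patches = [rng.random((32, 32)) for _ in range(20)]  # stand-in "textures"
feats = icag2_features(patches)
print(feats.shape)  # (20, 4)
```

The sketch also shows why framework II is the cheaper one: each transformed image contributes only two numbers (mean and standard deviation) to the feature vector, whereas framework I would vectorize every pixel of every transformed image before PCA/ICA.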