In the production of viscose filament, broken-filament inspection is the most important step in detecting filament defects. To address the low speed and accuracy of broken-filament detection and to improve the online quality-inspection system, this paper proposes a broken-filament detection method for viscose filaments based on an improved YOLOv5 algorithm. The GhostNet structure replaces and modifies the YOLOv5 backbone to reduce structural complexity and computation, making the overall network lightweight; the ECA attention mechanism is introduced into the backbone to enhance feature perception of broken-filament targets and improve the flow of feature information through the deep network. In the final experiments, the improved YOLOv5 algorithm achieves an average detection accuracy of 93.9% at an average detection speed of 64 FPS, outperforming traditional image-recognition detection methods and meeting the real-time requirements of broken-filament detection in practical engineering.
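The ECA mechanism mentioned above gates each channel by squeezing it to a scalar, running a small 1-D convolution across the channel descriptors, and applying a sigmoid. The following is a minimal pure-Python sketch of that data flow; the uniform convolution weights are a placeholder assumption (in the real module they are learned), so this illustrates the mechanism rather than reproducing the paper's trained network.

```python
import math

def eca_channel_weights(feature_maps, k=3):
    """Efficient Channel Attention (ECA) gate, pure-Python sketch.

    feature_maps: list of 2-D lists, one H x W map per channel.
    Returns one gate in (0, 1) per channel.
    """
    # 1. Squeeze: global average pooling per channel.
    desc = [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
            for fm in feature_maps]
    # 2. 1-D convolution of size k across the channel descriptors
    #    (uniform placeholder weights; learned in the real module).
    pad = k // 2
    padded = [0.0] * pad + desc + [0.0] * pad
    w = [1.0 / k] * k
    conv = [sum(w[j] * padded[i + j] for j in range(k))
            for i in range(len(desc))]
    # 3. Excite: sigmoid gate per channel.
    return [1.0 / (1.0 + math.exp(-c)) for c in conv]

def apply_eca(feature_maps, k=3):
    """Rescale each channel map by its attention gate."""
    gates = eca_channel_weights(feature_maps, k)
    return [[[v * g for v in row] for row in fm]
            for fm, g in zip(feature_maps, gates)]
```

Because the gate is computed by a shared 1-D convolution rather than a fully connected layer, ECA adds only a handful of parameters, which is why it pairs well with a lightweight GhostNet backbone.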
KEYWORDS: Optical character recognition, Object detection, Convolution, Tunable filters, Statistical modeling, Deep learning, Target recognition, Signal to noise ratio
To enable rapid detection of stamped characters on castings at industrial sites, and to better classify and manage castings, an end-to-end OCR (Optical Character Recognition) system for casting stamp characters was designed using deep learning; under the complex environmental conditions of an industrial site, these stamped characters exhibit low contrast and indistinct edge features. An improved TextBoxes detector locates the target character region in the input image, and the CRNN (Convolutional Recurrent Neural Network) recognition method recognizes and outputs the detected stamped characters; image data augmentation is also applied to improve recognition accuracy. The results show that the model reaches 98.9% recognition accuracy with an average inspection time of 0.27 s, outperforming traditional template-matching methods and other current mainstream deep-learning object-detection algorithms in both recognition accuracy and speed, and providing a convenient and effective means of human-computer interaction.
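A CRNN recognizer emits one class score vector per image column and is typically decoded with CTC: take the best class at each timestep, collapse consecutive repeats, and drop the blank symbol. The sketch below shows that greedy decoding step; the alphabet indexing convention (non-blank class `i` maps to `alphabet[i - 1]`) is an assumption for illustration, not taken from the paper.

```python
def ctc_greedy_decode(frame_scores, alphabet, blank=0):
    """Greedy CTC decoding of CRNN-style outputs (sketch).

    frame_scores: list of per-timestep score lists, one score per
    class, where index `blank` is the CTC blank symbol.
    alphabet: string mapping non-blank class i -> alphabet[i - 1].
    """
    # 1. Best-scoring class at each timestep.
    best = [max(range(len(s)), key=s.__getitem__) for s in frame_scores]
    # 2. Collapse consecutive repeats, then drop blanks, so that
    #    e.g. [A, A, blank, B] decodes to "AB".
    decoded = []
    prev = None
    for c in best:
        if c != prev and c != blank:
            decoded.append(alphabet[c - 1])
        prev = c
    return "".join(decoded)
```

The blank symbol is what lets the network represent genuinely repeated characters (such as "77") without them collapsing into one, since a blank frame between the repeats breaks the run.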
To address the fact that manual examination of gastric cancer pathology images is time-consuming, laborious, and demands substantial medical expertise, this paper proposes a method based on an improved convolutional neural network to assist in recognizing gastric cancer pathology images. A residual network (ResNet) serves as the basis for optimization and improvement, with the network structure adjusted to better suit the recognition task. The two-dimensional attention mechanism of the Convolutional Block Attention Module (CBAM) is introduced to improve model accuracy, and transfer learning is used to accelerate training and convergence. To improve the generalization of the network model, the experimental images were augmented. The experimental results show that in the fifth network stage, CONV5, replacing the first convolutional layer with the attention module outperforms replacing the second layer or both layers simultaneously, with the attention module arranged serially as CAM+SAM. Compared with VGG16, AlexNet, and GoogLeNet, the improved network achieves the highest accuracy, 96.58%.
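The serial CAM+SAM arrangement reported above means channel attention runs first and spatial attention runs on its output. The sketch below shows that ordering in pure Python; the learned shared MLP and the 7x7 convolution of real CBAM are replaced by simple sigmoid placeholders (an assumption), so it demonstrates the data flow rather than a trained module.

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def cbam_serial(feature_maps):
    """Serial CBAM sketch: channel attention (CAM), then spatial
    attention (SAM), on a C x H x W tensor stored as nested lists.
    """
    C = len(feature_maps)
    H, W = len(feature_maps[0]), len(feature_maps[0][0])

    # --- CAM: avg- and max-pooled descriptors per channel, combined
    #     and passed through a sigmoid gate (placeholder for the
    #     shared MLP of real CBAM). ---
    gates = []
    for fm in feature_maps:
        avg = sum(sum(row) for row in fm) / (H * W)
        mx = max(max(row) for row in fm)
        gates.append(_sigmoid(avg + mx))
    x = [[[v * g for v in row] for row in fm]
         for fm, g in zip(feature_maps, gates)]

    # --- SAM: avg and max across channels at each location, combined
    #     and gated (placeholder for the 7x7 conv of real CBAM). ---
    out = []
    for c in range(C):
        plane = []
        for i in range(H):
            row = []
            for j in range(W):
                vals = [x[k][i][j] for k in range(C)]
                s = _sigmoid(sum(vals) / C + max(vals))
                row.append(x[c][i][j] * s)
            plane.append(row)
        out.append(plane)
    return out
```

Running CAM before SAM means the spatial map is computed on channel-reweighted features, which is the serial ordering the experiments found most effective.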