Pedestrians involved in roadway accidents account for nearly 12 percent of all traffic fatalities and 59,000 injuries each year. Most injuries occur when pedestrians attempt to cross roads, and accident rates differ notably between midblock locations and intersections. Collecting data on pedestrian behavior is a time-consuming manual process that is prone to error, which leads to a lack of quality information to guide the proper design of lane markings and traffic signals to enhance pedestrian safety. Researchers at the Georgia Tech Research Institute are developing and testing an automated system that can be rapidly deployed to collect data supporting the analysis of pedestrian behavior at intersections and midblock crossings, with and without traffic signals. The system analyzes the collected video data to automatically identify and characterize the number of pedestrians and their behavior. It consists of a mobile trailer with four high-definition pan-tilt cameras for data collection. The software is custom designed and uses state-of-the-art commercial pedestrian detection algorithms. We present the system hardware and software design, the challenges encountered, and results from preliminary system testing. Preliminary results indicate the ability to provide representative quantitative data on pedestrian motion more efficiently than current techniques.
Efficient deboning is key to optimizing production yield (maximizing the amount of meat removed from a chicken frame while reducing the presence of bones). Many processors evaluate the efficiency of their deboning lines through manual yield measurements, which involve using a special knife to scrape the chicken frame for any remaining meat after it has been deboned. Researchers with the Georgia Tech Research Institute (GTRI) have developed an automated vision system for estimating this yield loss by correlating image characteristics with the amount of meat left on a skeleton. The yield loss estimation is accomplished by the system’s image processing algorithms, which correlate image intensity with meat thickness and calculate the total volume of meat remaining. The team has established a correlation between transmitted light intensity and meat thickness with an R<sup>2</sup> of 0.94. Employing a special illuminated cone and targeted software algorithms, the system can make measurements in under a second and achieves up to a 90-percent correlation with yield measurements performed manually. The same system is also able to determine the probability of bone chips remaining in the output product: it detects the presence or absence of clavicle bones with an accuracy of approximately 95 percent and fan bones with an accuracy of approximately 80 percent. This paper describes in detail the approach and design of the system, presents results from field testing, and highlights the potential benefits that such a system can provide to the poultry processing industry.
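The abstract does not give the estimation algorithm itself, but the two steps it names (a calibrated intensity-to-thickness relationship, then integration of per-pixel thickness into a volume) can be sketched as follows. The calibration values, the linear model form, and the `estimate_volume` helper are all illustrative assumptions, not the deployed system's code.

```python
import numpy as np

# Hypothetical calibration pairs: transmitted light intensity vs. measured
# meat thickness (mm). The values are illustrative only; the paper reports
# an R^2 of 0.94 for its own calibration.
calib_intensity = np.array([200.0, 170.0, 140.0, 110.0, 80.0])
calib_thickness = np.array([0.5, 1.5, 2.5, 3.5, 4.5])

# Fit a simple least-squares line mapping intensity to thickness.
slope, intercept = np.polyfit(calib_intensity, calib_thickness, 1)

def estimate_volume(image, pixel_area_mm2):
    """Estimate total remaining meat volume (mm^3) from an intensity image."""
    thickness = slope * image + intercept       # per-pixel thickness (mm)
    thickness = np.clip(thickness, 0.0, None)   # no negative thickness
    return thickness.sum() * pixel_area_mm2     # integrate over the frame

# Usage: a uniform 10x10 patch at intensity 140 maps to 2.5 mm per pixel,
# so with 0.04 mm^2 pixels the total volume is 100 * 2.5 * 0.04 = 10 mm^3.
patch = np.full((10, 10), 140.0)
vol = estimate_volume(patch, pixel_area_mm2=0.04)
```

A linear model is the simplest choice consistent with the reported single R<sup>2</sup> value; a real system might use a nonlinear fit per illumination setup.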
One technique to better utilize existing roadway infrastructure is the use of HOV and HOT lanes. Technology to monitor the use of these lanes would assist managers and planners in efficient roadway operation, yet no available occupancy detection systems perform at acceptable levels of accuracy in permanent field installations. The main goal of this research effort is to assess the feasibility of determining passenger occupancy with imaging technology. This is especially challenging because of recent changes in the glass types used by car manufacturers to reduce the solar heat load on vehicles. In this research, we describe a system that uses multi-plane imaging with appropriate wavelength selection to sense passengers in the front and rear seats of vehicles traveling in HOV/HOT lanes. The process of determining the required geometric relationships, the choice of illumination wavelengths, and the selection of appropriate sensors are described, taking driver safety considerations into account. The paper also covers the design and implementation of the software for window detection and people counting, which utilizes both image processing and machine learning techniques. The integration of the final system prototype is described along with the performance of the system operating at a representative location.
Researchers at the Georgia Tech Research Institute designed a vision inspection system for poultry kill line sorting with the potential for process control at various points throughout a processing facility. This system has been successfully operating in a plant for over two and a half years and has been shown to provide multiple benefits. With the introduction of HACCP-Based Inspection Models (HIMP), the opportunity for automated inspection systems to emerge as viable alternatives to human screening is promising. As more plants move to HIMP, these systems have great potential for augmenting a processing facility's visual inspection process. This will help maintain a more consistent and potentially higher throughput while helping the plant remain within the HIMP performance standards.
In recent years, several vision systems have been designed to analyze the exterior of a chicken and are capable of identifying Food Safety 1 (FS1) type defects under HIMP regulatory specifications. This means that a reliable vision system can be used in a processing facility as a carcass sorter to automatically detect and divert product that is not suitable for further processing. This improves the evisceration line efficiency by creating a smaller set of features that human screeners are required to identify. This can reduce the required number of screeners or allow for faster processing line speeds.
In addition to identifying FS1 category defects, the Georgia Tech vision system can also identify multiple "Other Consumer Protection" (OCP) category defects such as skin tears, bruises, broken wings, and cadavers. Monitoring these data in near real time allows the processing facility to address anomalies as soon as they occur. The Georgia Tech vision system can record minute-by-minute averages of the following defects: septicemia/toxemia, cadavers, over-scald, bruises, skin tears, and broken wings. In addition to these defects, the system also records length and width information for the entire chicken and for individual parts such as the breast, the legs, the wings, and the neck. The system also records average color and mis-hung birds, which can cause problems in further processing. Other relevant production information is also recorded, including truck arrival and offloading times, catching crew and flock serviceman data, the grower, the breed of chicken, and the number of dead-on-arrival (DOA) birds per truck.
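The minute-by-minute averaging described above amounts to bucketing per-bird records by minute and computing defect rates per bucket. A minimal sketch of that bookkeeping follows; the `DefectLogger` class and its method names are illustrative assumptions, not the plant system's interface.

```python
from collections import defaultdict
from datetime import datetime

class DefectLogger:
    """Illustrative minute-bucketed defect logger: each bird record
    increments the counters for the minute it was observed in."""

    def __init__(self):
        self.buckets = defaultdict(
            lambda: {"birds": 0, "defects": defaultdict(int)}
        )

    def record(self, timestamp, defects):
        """Log one bird and the list of defect labels found on it."""
        key = timestamp.replace(second=0, microsecond=0)  # truncate to minute
        bucket = self.buckets[key]
        bucket["birds"] += 1
        for d in defects:
            bucket["defects"][d] += 1

    def rate(self, minute, defect):
        """Fraction of birds in that minute showing the given defect."""
        b = self.buckets[minute.replace(second=0, microsecond=0)]
        return b["defects"][defect] / b["birds"] if b["birds"] else 0.0

# Usage: four birds in one minute, one with a bruise -> bruise rate 0.25
log = DefectLogger()
t = datetime(2024, 1, 1, 8, 30, 15)
for defects in ([], ["bruise"], [], []):
    log.record(t, defects)
bruise_rate = log.rate(datetime(2024, 1, 1, 8, 30, 45), "bruise")
```

The same buckets could carry the dimensional measurements (breast, leg, wing, neck) as running sums alongside the defect counts.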
Several interesting observations from the Georgia Tech vision system, which has been installed in a poultry processing plant for several years, are presented. Trend analysis has been performed on the performance of the catching crews and flock servicemen, and on the quality of the processed chickens as it relates to bird dimensions and equipment settings in the plant. The results have allowed researchers and plant personnel to identify potential areas for improvement in the processing operation, which should result in improved efficiency and yield.
Most cutting and deboning operations in meat processing require that accurate cuts be made to obtain maximum yield and ensure food safety. This is a significant concern for purveyors of deboned product, and the task is made more difficult by the variability present in most natural products.
The specific application of interest in this paper is the production of deboned poultry breast. This is typically obtained from a cut of the broiler called a 'front half' that includes the breast and the wings. The deboning operation typically consists of a cut that starts at the shoulder joint and then continues along the scapula. Trained, attentive workers do a very good job of making this cut. The breast meat is then removed by pulling on the wings. Inaccurate cuts lead to poor yield (the amount of boneless meat obtained relative to the weight of the whole carcass) and increase the probability that bone fragments end up in the product. As equipment designers seek to automate the deboning operation, the cutting task has been a significant obstacle to developing automation that maximizes yield without generating unacceptable levels of bone fragments.
The current solution is to sort the bone-in product into different weight ranges and then adjust the deboning machines to the average of each range. We propose an approach for obtaining key cut points by extrapolation from external reference points based on the anatomy of the bird. We show that this approach can be implemented using a stereo imaging system and that the accuracy in locating the cut points of interest is significantly improved. This should result in more accurate cuts and, consequently, improved yield while reducing the incidence of bone fragments. We also believe the approach could be extended to the processing of other species.
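To make the stereo step concrete, the sketch below shows the standard disparity-to-depth relation (Z = fB/d for a rectified pair with pixel coordinates measured from the principal point) for locating an external reference point in 3-D, followed by a placeholder extrapolation from two landmarks to a cut point. The landmark names, the fixed offset fraction, and both function signatures are illustrative assumptions; the paper's actual anatomical extrapolation is not given in this abstract.

```python
import numpy as np

def triangulate(xl, yl, xr, focal_px, baseline_mm):
    """Recover the 3-D position (mm) of a point seen in a rectified stereo
    pair, using the standard disparity relation Z = f * B / d."""
    disparity = xl - xr                        # pixels; assumes xl > xr
    Z = focal_px * baseline_mm / disparity     # depth along the optical axis
    X = xl * Z / focal_px                      # back-project left-image x
    Y = yl * Z / focal_px                      # back-project left-image y
    return np.array([X, Y, Z])

def cut_point_from_landmarks(p_shoulder, p_keel, offset_frac=0.3):
    """Placeholder extrapolation: put the cut point a fixed fraction of the
    way from the shoulder landmark toward the keel landmark. The fraction
    is a made-up illustration, not an anatomical result from the paper."""
    return p_shoulder + offset_frac * (p_keel - p_shoulder)

# Usage: focal length 800 px, baseline 60 mm, landmark at 40 px disparity
# -> Z = 800 * 60 / 40 = 1200 mm, X = 180 mm, Y = 45 mm
shoulder = triangulate(120.0, 30.0, 80.0, focal_px=800.0, baseline_mm=60.0)
keel = np.array([180.0, 145.0, 1200.0])
cut = cut_point_from_landmarks(shoulder, keel)
```

In practice the bird-to-bird variability the paper highlights means the landmark-to-cut-point mapping would be fit from measured anatomy rather than fixed as a constant fraction.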