Variable block size motion compensation significantly improves the rate-distortion performance of video coding at the cost of high computational complexity. Current fast inter-mode decision algorithms only speed up inter-mode decision (MD); they do not provide flexible computational-complexity control that can adapt to different hardware platforms while preserving optimized rate-distortion (R-D) performance. Rather than merely accelerating inter-mode decision, we propose a complexity-adjustable inter-mode decision algorithm that attains optimized coding performance under different computational complexity constraints. Our algorithm predicts the Lagrangian cost versus complexity slope (J-C slope) of MD for each macroblock (MB) by exploiting temporal and spatial correlations. MD is then applied to the MBs with the largest J-C slopes, which yields better prediction performance at less computational cost. Adjustable complexity is obtained by presetting, for each frame, the number of MBs on which MD is applied. Our experiments show that the algorithm can freely adjust its computational complexity while providing improved R-D performance under different computational constraints.
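The per-frame selection described above can be sketched as follows. The slope predictor, its blending weight, and all function names are illustrative assumptions, not the paper's exact formulation:

```python
def predict_slope(temporal_slope, spatial_slopes, w_t=0.5):
    """Blend the co-located MB's J-C slope from the previous frame with
    the mean slope of already-coded spatial neighbours (the weights here
    are illustrative assumptions)."""
    spatial = sum(spatial_slopes) / len(spatial_slopes) if spatial_slopes else 0.0
    return w_t * temporal_slope + (1 - w_t) * spatial

def select_mbs_for_full_md(predicted_slopes, budget):
    """Pick the `budget` macroblocks with the largest predicted J-C
    slope for full mode decision; the remaining MBs would fall back to
    a cheap mode (e.g. SKIP / 16x16 only)."""
    ranked = sorted(range(len(predicted_slopes)),
                    key=lambda i: predicted_slopes[i],
                    reverse=True)
    return set(ranked[:budget])

# Three MBs with (temporal slope, spatial-neighbour slopes):
slopes = [predict_slope(t, s)
          for t, s in [(4.0, [3.0, 5.0]), (1.0, [0.5]), (2.5, [2.0, 3.0])]]
chosen = select_mbs_for_full_md(slopes, budget=2)  # MBs 0 and 2
```

The frame-level complexity budget is then simply the `budget` argument, which realizes the preset-number-of-MBs control.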
A technique for recognizing vehicles in terms of direction, distance, and rate of change is presented. This represents very early work on this problem, with significant hurdles still to be addressed; these are discussed in the paper. However, preliminary results show promise for this technique in security and defense environments where penetration of a perimeter is of concern. The material described herein indicates a process whereby the protection of a barrier could be augmented by computers and installed cameras assisting the individuals charged with this responsibility. The technique we employ is called Finite Inductive Sequences (FI) and is proposed as a means for eliminating data requiring storage and recognition where conventional mathematical models don't eliminate enough and statistical models eliminate too much. FI is a simple idea based upon a symbol push-out technique that allows the order (inductive base) of the model to be set to an a priori value for all derived rules. The rules are obtained from exemplar data sets and are derived by a technique called factoring, yielding a table of rules called a ruling. These rules can then be used in pattern recognition applications such as the one described in this paper.
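One way a ruling could drive recognition of an observed symbol stream is to count how many symbols the rules correctly predict. The function below is only a schematic reading of how a ruling might be applied; the scoring rule and names are assumptions, not the paper's procedure:

```python
def recognition_score(sequence, ruling, base):
    """Score how well an observed symbol stream matches a derived ruling
    (a table mapping length-`base` contexts to the next symbol).  The
    fraction of correctly predicted symbols serves as a simple match
    measure for, e.g., perimeter-monitoring streams."""
    hits = 0
    total = 0
    for i in range(base, len(sequence)):
        context = tuple(sequence[i - base:i])
        if context in ruling:           # a rule covers this context
            total += 1
            if ruling[context] == sequence[i]:
                hits += 1               # rule predicted the symbol
    return hits / total if total else 0.0

# A toy order-1 ruling that expects strict alternation:
ruling = {('a',): 'b', ('b',): 'a'}
```

A stream of alternating symbols scores 1.0 against this ruling, while a constant stream scores 0.0, so a threshold on the score could flag anomalous (e.g. intruding) objects.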
A modeling technique is presented that encapsulates the behavior of a crowd that may or may not become hostile. Parameters such as slogans shouted, grouping, density, age, occasion, and leadership, among many other factors, can easily be estimated and submitted to the model. The model then begins a prediction process that can be corrected as more data are obtained; a predictor-corrector process is described that re-guides the prediction. In addition, we use the Metropolis simulation algorithm, with the Boltzmann weighting factor as input, to determine how individuals within the crowd may be influenced to follow a particular path, and cellular automata to control the sphere of influence of one individual over another. Lastly, we provide some sample output from the model to illustrate the flow of such a dynamical environment.
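The Metropolis step with Boltzmann weighting mentioned above has a standard form, sketched below for a single individual's path choice. The cost function and temperature parameter are modeling assumptions; only the acceptance rule itself is standard:

```python
import math
import random

def metropolis_step(current_cost, proposed_cost, temperature, rng=random):
    """Decide whether an individual adopts a proposed path.  Moves that
    lower the cost are always accepted; worsening moves are accepted
    with the Boltzmann probability exp(-delta / T), so a higher
    'temperature' (crowd agitation, say) makes erratic moves likelier."""
    delta = proposed_cost - current_cost
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)
```

Iterating this step over every individual, with each person's cost shaped by the states of their cellular-automaton neighbors, gives the kind of influence propagation the model describes.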
A technique for representing data obtained from sensors, video streams, imagery, sound, text, etc. is presented. The technique is called Finite Inductive Sequences (FI) and is proposed as a means for eliminating data requiring storage where conventional mathematical models don't eliminate enough and statistical models eliminate too much. FI is a simple idea based upon a symbol push-out technique that allows the order (inductive base) of the model to be set to an a priori value for all derived rules. The rules are obtained from an exemplar data set and are derived by a technique called factoring, which yields a table of rules called a ruling. These rules can then be used in pattern recognition applications. The techniques are illustrated both by example and in a more formal setting, and lastly the rules and ruling are likened to structure both present and absent in the cerebellum.
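A toy sketch of the factoring idea follows: scan the exemplar with a context of fixed length (the inductive base) and keep a symbol only when no existing rule already predicts it. This is a schematic reading of the technique for illustration, not the authors' exact algorithm:

```python
def factor(sequence, base):
    """Toy FI-style factoring.  For each position, form the length-`base`
    context of preceding symbols.  If an existing rule already predicts
    the next symbol, the symbol is pushed out (eliminated); otherwise a
    new rule context -> symbol joins the ruling and the symbol stays in
    the residue."""
    ruling = {}
    residue = []  # symbols the ruling could not predict
    for i in range(len(sequence)):
        context = tuple(sequence[max(0, i - base):i])
        symbol = sequence[i]
        if ruling.get(context) == symbol:
            continue  # predicted by an existing rule: factored out
        ruling[context] = symbol
        residue.append(symbol)
    return ruling, residue
```

On a repetitive exemplar such as "ababab" with base 1, only the first few symbols survive in the residue; the rest are eliminated by the ruling, which is the storage-reduction effect the abstract describes.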
As the amount of information in the world steadily increases, there is a growing demand for tools to analyze it. Many scholars have studied machine learning in order to obtain knowledge from domain data sets, hoping to find patterns in the form of implicit dependencies in the data. Artificial neural networks are efficient computing models which have shown their strength in solving hard problems in artificial intelligence, and they have also been shown to be universal approximators. Much work has been done to interpret neural networks so that they are no longer seen as black boxes, providing methods for knowledge acquisition using neural networks. These can be classified into three categories: fuzzy neural networks, CF (certainty factor) based neural networks, and logical neurons. We review some of these research works in this paper.
In this paper, we propose an approach that generates logical rules from an information system, based on Pawlak's rough set theory. There are two steps in our rule-generation approach. First, attribute reduction is performed on the information table using Skowron's discernibility matrix and logic-function simplification, extracting the important and valuable attributes. Then, value reduction is performed and the corresponding logic rules are generated. All reducts of an information system, including the minimal reduct, can be obtained through these two reductions. Our approach can generate both the maximal generalized decision rules and potentially interesting and useful rules, according to requirements.
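The first step above can be sketched for a toy decision table: build the discernibility matrix, then find the smallest attribute sets that intersect every matrix entry (a brute-force stand-in for logic-function simplification, feasible only for small tables; the table and names are illustrative):

```python
from itertools import combinations

def discernibility_matrix(objects, decisions):
    """Skowron's discernibility matrix for a small decision table: for
    each pair of objects with different decisions, record the set of
    condition attributes on which their values differ."""
    entries = []
    for i, j in combinations(range(len(objects)), 2):
        if decisions[i] != decisions[j]:
            diff = {a for a in objects[i] if objects[i][a] != objects[j][a]}
            if diff:
                entries.append(frozenset(diff))
    return entries

def minimal_reducts(all_attrs, entries):
    """Smallest attribute sets that hit every discernibility entry;
    equivalent to simplifying the conjunction of discernibility clauses
    by exhaustive search."""
    for k in range(1, len(all_attrs) + 1):
        found = [set(c) for c in combinations(sorted(all_attrs), k)
                 if all(set(c) & e for e in entries)]
        if found:
            return found
    return [set(all_attrs)]

# Toy table: three objects over condition attributes a, b, c.
objs = [{'a': 0, 'b': 0, 'c': 1},
        {'a': 1, 'b': 0, 'c': 1},
        {'a': 0, 'b': 1, 'c': 0}]
entries = discernibility_matrix(objs, ['no', 'yes', 'yes'])
reducts = minimal_reducts({'a', 'b', 'c'}, entries)  # {a,b} and {a,c}
```

Here both minimal reducts retain attribute `a`, so `a` belongs to the core; value reduction and rule generation would then proceed on a chosen reduct.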