When taking pictures, professional photographers apply photographic composition rules, e.g., the rule of thirds. The rule of thirds places the main subject's center at one of four points: 1/3 or 2/3 of the picture width from the left edge, and 1/3 or 2/3 of the picture height from the top edge. This paper develops low-complexity unsupervised methods for digital still cameras to (1) segment the main subject and (2) realize the rule of thirds.
The main subject segmentation method applies the auto-focus filter, opens the aperture fully, and segments the resulting image. These camera settings keep the main subject in focus while diffused light blurs the rest of the image. The segmentation exploits the difference in frequency content between the in-focus main subject and the blurred background, and does not depend on prior knowledge of the indoor/outdoor setting or scene content.
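The frequency-content cue can be sketched as follows. The block size, variance-of-Laplacian focus measure, and threshold fraction are illustrative choices for the sketch, not the paper's actual segmentation algorithm:

```python
import numpy as np

def focus_map(image, block=8):
    """Mark blocks whose high-frequency energy suggests they are in focus.

    The in-focus main subject retains high-frequency content, while the
    defocused background is low-pass; blocks whose variance-of-Laplacian
    exceeds a crude global threshold are labeled as subject.
    """
    # 4-neighbor discrete Laplacian (a simple high-pass filter)
    lap = (-4 * image
           + np.roll(image, 1, 0) + np.roll(image, -1, 0)
           + np.roll(image, 1, 1) + np.roll(image, -1, 1))
    h, w = image.shape
    hb, wb = h // block, w // block
    tiles = lap[:hb * block, :wb * block].reshape(hb, block, wb, block)
    energy = tiles.var(axis=(1, 3))        # per-block focus measure
    return energy > 0.25 * energy.max()    # illustrative unsupervised threshold

# Synthetic test image: a sharp textured patch on a smooth background
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:48, 16:48] = rng.standard_normal((32, 32))  # "in focus" subject
mask = focus_map(img)  # True on the textured blocks only
```

The textured region stands in for the in-focus subject; a defocused background has near-zero Laplacian energy and falls below any reasonable threshold.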
The rule-of-thirds method moves the centroid of the main subject to the closest of the four rule-of-thirds locations. We first define an objective function that measures how closely the main subject's placement obeys the rule of thirds, and then reposition the main subject to optimize the objective function. For multiple main subjects, the proposed algorithm could be extended to the rule of triangles by adding an appropriate constraint.
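A minimal sketch of the repositioning step, assuming a squared-distance objective over the four candidate locations (the paper's exact objective function may differ):

```python
def thirds_points(width, height):
    """The four rule-of-thirds points of a width x height frame."""
    return [(width * i / 3, height * j / 3) for i in (1, 2) for j in (1, 2)]

def thirds_shift(centroid, width, height):
    """Translation moving the subject centroid to the nearest
    rule-of-thirds point, i.e. the minimizer of a squared-distance
    objective over the four candidate locations."""
    cx, cy = centroid
    tx, ty = min(thirds_points(width, height),
                 key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
    return tx - cx, ty - cy

# A subject centroid at (350, 150) in a 600x400 frame is nearest the
# point (400, 400/3), so the subject shifts right and slightly up.
dx, dy = thirds_shift((350, 150), 600, 400)
```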
Conventional grayscale error diffusion halftoning produces worms and other objectionable artifacts. Tone-dependent error diffusion (Li and Allebach) reduces these artifacts by controlling the diffusion of quantization errors based on the input graylevel. Li and Allebach optimize error filter weights and thresholds for each input graylevel based on a human visual system model. This paper extends tone-dependent error diffusion to color. In color error diffusion, what color to render becomes a major concern in addition to finding optimal dot patterns. We present a visually optimal design approach for input-level (tone) dependent error filters for each color plane. The resulting halftones reduce traditional error diffusion artifacts and achieve greater accuracy in color rendition.
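As a rough illustration of the mechanism on one gray plane, the sketch below switches between two error filters based on the input graylevel. The filters are hand-picked stand-ins, not the HVS-optimized filters of Li and Allebach:

```python
import numpy as np

# Two illustrative error filters: Floyd-Steinberg weights for midtones
# and a different spread near the extremes. Hand-picked stand-ins only.
MID = {(0, 1): 7/16, (1, -1): 3/16, (1, 0): 5/16, (1, 1): 1/16}
EXTREME = {(0, 1): 8/16, (1, -1): 2/16, (1, 0): 4/16, (1, 1): 2/16}

def tde_halftone(img):
    """Binarize img (floats in [0, 1]), choosing the error filter per
    pixel from the *input* graylevel, as tone-dependent diffusion does."""
    h, w = img.shape
    work = img.astype(float).copy()
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            weights = MID if 0.25 <= img[y, x] <= 0.75 else EXTREME
            out[y, x] = 1.0 if work[y, x] >= 0.5 else 0.0
            err = work[y, x] - out[y, x]
            for (dy, dx), wgt in weights.items():
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    work[yy, xx] += err * wgt  # diffuse the quantization error
    return out

halftone = tde_halftone(np.full((16, 16), 0.5))  # mean stays near 0.5
```

Because the filter is indexed by the input rather than the modified pixel value, the choice of dot pattern is decoupled from the diffused errors, which is what makes per-level optimization tractable.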
Grayscale error diffusion introduces nonlinear distortion (directional artifacts and false textures), linear distortion (sharpening), and additive noise. Since error diffusion is 2-D sigma-delta modulation (Anastassiou, 1989), Kite <i>et al</i>. linearize error diffusion by replacing the thresholding quantizer with a scalar gain plus additive noise. Sharpening is proportional to the scalar gain. Kite <i>et al</i>. derive the value of the sharpness control parameter in threshold modulation (Eschbach and Knox, 1991) that compensates for the linear distortion. These unsharpened halftones are particularly useful in perceptually weighted SNR measures. False textures at mid-gray (Fan and Eschbach, 1994) are due to limit cycles, which can be broken up by using a deterministic bit flipping quantizer (Damera-Venkata and Evans, 2001). We review other variations on grayscale error diffusion that reduce false textures in shadow and highlight regions, including green noise halftoning (Levien, 1993) and tone-dependent error diffusion (Li and Allebach, 2002). We then discuss color error diffusion in several forms: color plane separable (Kolpatzik and Bouman, 1992); vector quantization (Shaked <i>et al</i>., 1996); green noise extensions (Lau <i>et al</i>., 2000); and matrix-valued error filters (Damera-Venkata and Evans, 2001). We conclude with open research problems.
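The linear gain model lends itself to a quick numerical check. The transfer functions below follow Kite et al.'s linearization; the gain Ks = 2 is an arbitrary stand-in for illustration, not a measured quantizer gain:

```python
import numpy as np

# Replacing the quantizer with a scalar gain Ks plus additive noise n
# turns the halftone into STF*x + NTF*n, with
#   STF(w) = Ks / (1 + (Ks - 1) H(w))
#   NTF(w) = (1 - H(w)) / (1 + (Ks - 1) H(w))
# where H is the error filter's frequency response.
fs = np.zeros((2, 3))
fs[0, 2], fs[1, 0], fs[1, 1], fs[1, 2] = 7/16, 3/16, 5/16, 1/16  # Floyd-Steinberg

def H(w1, w2, taps=fs):
    """Frequency response of the error filter (current pixel at column 1)."""
    acc = 0j
    for (m, n), c in np.ndenumerate(taps):
        acc += c * np.exp(-1j * (w1 * m + w2 * (n - 1)))
    return acc

Ks = 2.0  # stand-in value; the effective gain is estimated, not assumed
stf = lambda w1, w2: Ks / (1 + (Ks - 1) * H(w1, w2))
ntf = lambda w1, w2: (1 - H(w1, w2)) / (1 + (Ks - 1) * H(w1, w2))

# The filter taps sum to 1, so H(0,0) = 1: at DC the signal passes with
# unit gain (average tone preserved) and the noise is fully rejected,
# while any Ks > 1 boosts the signal at high frequencies (sharpening).
dc_signal = stf(0.0, 0.0)  # -> 1.0
dc_noise = ntf(0.0, 0.0)   # -> 0.0
```

The model thus explains in one stroke why error diffusion preserves tone, shapes quantization noise to high frequencies, and sharpens edges.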
We present a solution to a complex multi-tone transient detection problem to illustrate the integrated use of symbolic and numeric processing techniques supported by well-established underlying models. Examples of such models include synchronous dataflow for numeric processing and the blackboard paradigm for symbolic heuristic search. Our transient detection solution emphasizes the importance of developing system design methods and tools that support the integrated use of well-established symbolic and numerical models of computation. Recently, we incorporated a blackboard-based model of computation underlying the Integrated Processing and Understanding of Signals (IPUS) paradigm into Ptolemy, a system-level design environment for numeric processing. Using the IPUS/Ptolemy environment, we are implementing our solution to the multi-tone transient detection problem.
Proc. SPIE 2563, Advanced Signal Processing Algorithms
This paper examines some of the roles that symbolic computation plays in assisting system-level simulation and design. By symbolic computation, we mean programs like Mathematica that perform symbolic algebra and apply transformation rules based on algebraic identities. At a behavioral level, symbolic computation can compute parameters, generate new models, and optimize parameter settings. At the synthesis level, symbolic computation can work in tandem with synthesis tools to rewrite cascade and parallel combinations of components in subsystems to meet design constraints. Symbolic computation represents one type of tool that may be invoked in the complex flow of the system design process. The paper discusses the qualities that a formal infrastructure for managing system design should have. It also describes an implementation of this infrastructure called DesignMaker, implemented in the Ptolemy environment, which manages the flow of tool invocations efficiently using a graphical file dependency mechanism.
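As a toy instance of the synthesis-level role, the sketch below uses SymPy as a stand-in for Mathematica-style symbolic algebra; the two first-order sections are invented examples, not components from the paper:

```python
import sympy as sp

z, a, b = sp.symbols('z a b')
H1 = 1 / (1 - a / z)  # hypothetical first-order IIR section
H2 = 1 / (1 - b / z)  # second hypothetical section in cascade

# Rewrite the cascade as one rational function of z, the kind of
# algebraic rewriting a synthesis tool could request in order to
# check a design constraint (e.g. a bound on the DC gain).
cascade = sp.cancel(H1 * H2)               # z**2 / ((z - a)*(z - b))
dc_gain = sp.simplify(cascade.subs(z, 1))  # equals 1/((1 - a)*(1 - b))
```

The symbolic result holds for all parameter values at once, so a single rewrite can stand in for many numerical simulation runs during design-space exploration.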