In recent years, real-time Magnetic Resonance Imaging (RT-MRI) has been used to acquire vocal tract data in support of articulatory studies. The large number of images resulting from these acquisitions must be processed, and the resulting data analysed, to extract articulatory features. This analysis is often performed by linguists and phoneticians and requires not only tools providing high-level exploration of the data, to gather insight into the different aspects of speech, but also a set of features for comparing different vocal tract configurations in static and dynamic scenarios. To make the data available faster and in a more systematic fashion, without the continuous direct involvement of image processing specialists, a framework is being developed to bridge the gap between the more technical aspects of the raw data and the higher-level analysis required by speech researchers. In its current state it already includes segmentation of the vocal tract, allows users to explore the different aspects of the acquired data using coordinated views, and provides support for comparing vocal tract configurations. Beyond the traditional method of visually comparing vocal tract profiles, a quantitative method is proposed that considers relevant anatomical features and is supported by an abstract representation of the data for both static and dynamic analysis.
In medical image processing and analysis, segmentation is often required to obtain quantitative measures
of extent, volume and shape.
The validation of new segmentation methods and tools usually implies comparing their various outputs among
themselves (or with a ground truth) using similarity metrics. Several such metrics have been proposed in the
literature, but it is important to select those which are relevant for a particular task, rather than using all of
them, thereby avoiding additional computational cost and redundancy.
A methodology is proposed that enables assessing how different similarity and discrepancy metrics behave
for a particular comparison, and selecting those which provide relevant data.
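Two of the most common overlap metrics for comparing binary segmentations, the Dice coefficient and the Jaccard index, can be sketched as follows (a minimal illustration of the kind of similarity metric discussed above, not the specific set evaluated in this work):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard index (intersection over union) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return intersection / union
```

Both range from 0 (no overlap) to 1 (identical masks) and are monotonically related, which is precisely the kind of redundancy a metric-selection methodology aims to detect.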
The complexity of a polygonal mesh is usually reduced by applying a simplification method, resulting in a similar
mesh having fewer vertices and faces. Although several such methods have been developed, only a few observer
studies have been reported comparing the perceived quality of the simplified meshes, and it is not yet clear how the
choice of a given method, and the level of simplification achieved, influence the quality of the resulting mesh, as
perceived by the final users. Similar issues occur with other mesh processing methods, such as smoothing.
Mesh quality indices are the obvious, less costly alternative to user studies, but it is also not clear how they relate
to perceived quality, and which indices best describe the users' behavior.
This paper describes ongoing work concerning the evaluation of the perceived quality of polygonal meshes using
observer studies, while looking for a quality index that estimates user performance. In particular, given the
results obtained in previous studies, a new experimental protocol was designed and a study involving 55 users
was carried out, which allowed those earlier results to be validated and provided further insight into mesh
quality as perceived by human observers.
The complexity of a polygonal mesh model is usually reduced by applying a simplification method, resulting in
a similar mesh having fewer vertices and faces. Although several such methods have been developed, only a few
observer studies have been reported comparing them regarding the perceived quality of the obtained simplified
meshes, and it is not yet clear how the choice of a given method, and the level of simplification achieved,
influence the quality of the resulting model, as perceived by the final users. Mesh quality indices are the obvious,
less costly alternative to user studies, but it is also not clear how they relate to perceived quality, and which
indices best describe the users' behavior.
Following on earlier work carried out by the authors, but only for mesh models of the lungs, a comparison
among the results of three simplification methods was performed through (1) quality indices and (2) a controlled
experiment involving 65 observers, for a set of five reference mesh models of different kinds. These were simplified
using two methods provided by the OpenMesh library - one using error quadrics, the other additionally using
a normal flipping criterion - and also by the widely used QSlim method, for two simplification levels: 50% and
20% of the original number of faces. The main goal was to ascertain whether the findings previously obtained
for lung models, through quality indices and a study with 32 observers, could be generalized to other types of
models and confirmed for a larger number of observers. Data obtained using the quality indices and the results
of the controlled experiment were compared and do confirm that some quality indices (e.g., geometric distance
and normal deviation, as well as a new proposed weighted index) can be used, in specific circumstances, as
reasonable estimators of the user perceived quality of mesh models.
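The two quality indices highlighted above can be illustrated with a simplified sketch. Note that practical geometric-distance indices are computed point-to-surface (e.g., via sampling) rather than between corresponding vertices; the correspondence-based version below is an assumption made to keep the example short:

```python
import numpy as np

def rms_distance(v_ref: np.ndarray, v_simp: np.ndarray) -> float:
    """RMS of Euclidean distances between corresponding vertices
    (Nx3 arrays); a crude stand-in for a point-to-surface distance index."""
    d = np.linalg.norm(v_ref - v_simp, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

def mean_normal_deviation(n_ref: np.ndarray, n_simp: np.ndarray) -> float:
    """Mean angle, in degrees, between corresponding unit normals (Nx3)."""
    cos = np.clip(np.sum(n_ref * n_simp, axis=1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())
```

A weighted index of the kind mentioned above would then combine such per-index values into a single score, with weights tuned against observer data.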
Polygonal meshes are used in many application scenarios. Often the generated meshes are too complex, preventing
proper interaction, visualization or transmission through a network. To tackle this problem, simplification
methods can be used to generate less complex versions of those meshes.
For this purpose many methods have been proposed in the literature and it is of paramount importance that
each new method be compared with its predecessors, thus allowing quality assessment of the solution it provides.
This systematic evaluation of each new method requires tools which provide all the necessary features (ranging
from quality measures to visualization methods) to help users gain greater insight into the data.
This article presents the comparison of two simplification algorithms, NSA and QSlim, using PolyMeCo, a
tool which enhances the way users perform mesh analysis and comparison, by providing an environment where
several visualization options are available and can be used in a coordinated way.
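QSlim is based on error quadrics: each face contributes a plane quadric, vertices accumulate the quadrics of their incident faces, and the cost of moving (or contracting) a vertex is evaluated against the accumulated quadric. A minimal sketch of that core idea follows; the function names are illustrative, and pair selection, edge contraction and heap maintenance, which the actual method requires, are omitted:

```python
import numpy as np

def face_quadric(p0: np.ndarray, p1: np.ndarray, p2: np.ndarray) -> np.ndarray:
    """Fundamental quadric K = p p^T for a triangle's supporting plane,
    where p = (a, b, c, d) with unit normal (a, b, c)."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    d = -np.dot(n, p0)
    p = np.append(n, d)        # homogeneous plane coefficients
    return np.outer(p, p)

def vertex_error(Q: np.ndarray, v: np.ndarray) -> float:
    """Quadric error v^T Q v: squared distance of v to the plane(s) in Q."""
    vh = np.append(v, 1.0)     # homogeneous coordinates
    return float(vh @ Q @ vh)
```

Summing the quadrics of all planes meeting at a vertex gives the error measure the method minimises when choosing which contraction to perform next.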
Meshes are currently used to model objects, namely human organs and other structures. However, if they have a large number of triangles, their rendering times may not be adequate to allow interactive visualization, a highly desirable feature in some diagnosis (or, more generally, decision) scenarios, where the choice of adequate views is important. In this case, a possible solution consists of showing a simplified version while the user interactively chooses the viewpoint and, then, a fully detailed version of the model to support its analysis. To tackle this problem, simplification methods can be used to generate less complex versions of meshes. While several simplification methods have been developed and reported in the literature, only a few studies compare them concerning the perceived quality of the obtained simplified meshes.
This work describes an experiment conducted with human observers in order to compare three different simplification methods used to simplify mesh models of the lungs. We intended to study whether any of these methods yields better perceived quality for the same simplification rate.
A protocol was developed to measure these aspects. The results presented were obtained from 32 human observers. The comparison between the three mesh simplification methods was first performed through an Exploratory Data Analysis, and the significance of this comparison was then established using other statistical methods. Moreover, the influence of some other factors on the observers' performance was also investigated.