Abstract: In this work, a surgical simulator is developed that enables a trainee otologist to conduct a virtual, real-time prosthetic
insertion. The simulator provides the Ear, Nose and Throat surgeon
with real-time visual and haptic responses during virtual cochlear
implantation into a 3D model of the human Scala Tympani (ST). The
parametric model is derived from measured data as published in the
literature and accounts for human morphological variance, such as
differences in cochlear shape, enabling patient-specific pre-operative
assessment. Haptic modeling techniques use real physical data and insertion force measurements to develop a force model that mimics the physical behavior of an implant as it collides with the ST
walls during an insertion. Output force profiles are acquired from the
insertion studies conducted in this work to validate the haptic model.
The simulator provides the user with real-time, quantitative insertion force information and the associated electrode position as the user inserts the virtual implant into the ST model. The information provided by this
study may also be of use to implant manufacturers for design
enhancements, as well as for training specialists in optimal force administration using the simulator. The paper reports on the methods
for anatomical modeling and haptic algorithm development, with
focus on simulator design, development, optimization and validation.
The techniques may be transferable to other medical applications
that involve prosthetic device insertions where user vision is
obstructed.
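The haptic response described above can be illustrated, in heavily simplified form, by a generic penalty-based contact law: zero force until the virtual electrode penetrates the ST wall, then a spring-damper response. This is a minimal sketch, not the paper's calibrated model; the gains `k` and `b` are arbitrary placeholders, not values from the insertion studies.

```python
def contact_force(penetration_depth, velocity, k=500.0, b=2.0):
    """Penalty-based contact force: zero outside the wall, spring-damper inside.

    penetration_depth: how far the tool tip has penetrated the wall (m),
    velocity: penetration rate (m/s); k, b are illustrative stiffness and
    damping gains, not the calibrated values from the paper.
    """
    if penetration_depth <= 0.0:
        return 0.0  # no contact, no force
    # Spring term resists penetration; damping term resists penetration rate.
    force = k * penetration_depth + b * velocity
    return max(force, 0.0)  # a contact force never pulls the tool inward
```

In a real simulator this force would be rendered at the haptic device's update rate (typically around 1 kHz) while the wall geometry comes from the ST model.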
Abstract: Color constancy algorithms are generally based on simplified assumptions about the spectral distribution or the reflection attributes of the scene surface. In reality, however, these assumptions are too restrictive. A methodology is proposed to extend existing algorithms by applying color constancy locally to image patches rather than globally to the entire image.
In this paper, a method based on low-level image features using
superpixels is proposed. Superpixel segmentation partitions an image into regions that are approximately uniform in size and shape. Instead of using the entire pixel set for estimating the illuminant, only the superpixels with the most valuable information are used. Large-scale experiments on real-world scenes show that the estimation is more accurate using superpixels than using the entire image.
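As a rough sketch of the idea, the following uses regular image blocks as a stand-in for superpixels (a real implementation would use SLIC or a similar segmentation), ranks them by a simple texture score, and averages only the most informative ones into a grey-world illuminant estimate. The block size, the variance score, and the keep fraction are illustrative assumptions, not the paper's choices.

```python
import numpy as np

def estimate_illuminant(image, block=16, keep_frac=0.25):
    """Grey-world illuminant estimate restricted to the most textured blocks.

    `image` is an (H, W, 3) float array. Regular blocks stand in for
    superpixels; blocks are ranked by luminance variance and only the top
    `keep_frac` fraction contributes to the RGB average.
    """
    h, w, _ = image.shape
    blocks, scores = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block]
            blocks.append(patch.reshape(-1, 3).mean(axis=0))  # mean RGB
            scores.append(patch.mean(axis=2).var())           # texture proxy
    order = np.argsort(scores)[::-1]                          # most textured first
    keep = max(1, int(len(order) * keep_frac))
    est = np.mean([blocks[i] for i in order[:keep]], axis=0)
    return est / np.linalg.norm(est)  # unit-norm illuminant direction
```

On a synthetic scene of random reflectances under a reddish light, the recovered direction closely matches the true illuminant direction.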
Abstract: Short term electricity demand forecasts are required
by power utilities for efficient operation of the power grid. In a
competitive market environment, suppliers and large consumers also
require short term forecasts in order to estimate their energy
requirements in advance. Electricity demand is influenced (among
other things) by the day of the week, the time of year and special
periods and/or days such as Ramadhan, all of which must be
identified prior to modelling. This identification, known as day-type
identification, must be included in the modelling stage either by
segmenting the data and modelling each day-type separately or by
including the day-type as an input. Day-type identification is the
main focus of this paper. A Kohonen map is employed to identify the
separate day-types in Algerian data.
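A minimal 1-D Kohonen map over daily load profiles might look as follows. The unit count, decay schedules, and farthest-point initialization are illustrative choices, not those used in the paper, and the demand curves below are synthetic.

```python
import numpy as np

def train_som(profiles, n_units=4, epochs=400, lr0=0.5, sigma0=2.0, seed=0):
    """Train a tiny 1-D Kohonen map on daily load profiles.

    `profiles` is an (n_days, 24) array of hourly demand curves. The
    returned (n_units, 24) codebook vectors play the role of day-types.
    """
    rng = np.random.default_rng(seed)
    # Farthest-point initialization keeps the initial units well separated.
    idx = [int(rng.integers(len(profiles)))]
    while len(idx) < n_units:
        d = np.min([((profiles - profiles[i]) ** 2).sum(axis=1) for i in idx], axis=0)
        idx.append(int(d.argmax()))
    weights = profiles[idx].copy()
    positions = np.arange(n_units, dtype=float)
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)              # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5  # shrinking neighbourhood
        x = profiles[rng.integers(len(profiles))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
        h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)
    return weights

def day_type(profiles, weights):
    """Assign each day to its nearest codebook vector (its day-type)."""
    d = ((profiles[:, None, :] - weights[None]) ** 2).sum(axis=2)
    return d.argmin(axis=1)
```

Once trained, each codebook unit is a prototype day-type, and either the assignment index or the segmented data can feed the downstream demand model.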
Abstract: In projects such as waterpower, transportation and mining, delineating the rock-mass structure and hidden tectonic features to estimate the activity of the geological body is very important. Integrating seismic results with drilling and trenching data, a CSAMT survey was carried out at a planned dam site in southwest China to evaluate the stability of a deformation body. 2D and quasi-3D inversion resistivity results of the CSAMT method were analyzed. The results indicated that CSAMT is an effective method for defining the outline of a deformation body to several hundred meters deep; the Lung Pan Deformation is stable under natural conditions, but its stability is uncertain after the future reservoir is impounded.
This research presents a good case study of fine surveying and research on complex geological structures and hidden tectonic features in engineering projects.
Abstract: Bone remodeling occurs by the balanced action of
bone resorbing osteoclasts (OC) and bone-building osteoblasts.
Increased bone resorption by excessive OC activity contributes
to malignant and non-malignant diseases including osteoporosis.
To study OC differentiation and function, OC formed in
in vitro cultures are currently counted manually, a tedious
procedure which is prone to inter-observer differences. Aiming
for an automated OC-quantification system, classification of
OC and precursor cells was done on fluorescence microscope
images based on the distinct appearance of fluorescent nuclei.
Following ellipse fitting to nuclei, a combination of eight
features enabled clustering of OC and precursor cell nuclei.
After evaluating different machine-learning techniques, LOGREG
achieved 74% correctly classified OC and precursor cell
nuclei, outperforming human experts (best expert: 55%). In
combination with the automated detection of total cell areas, this system makes it possible to measure various cell parameters and, most importantly, to quantify proteins involved in osteoclastogenesis.
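The LOGREG step above can be sketched as plain logistic regression fitted by gradient descent on per-nucleus features. This is a minimal sketch under stated assumptions: the two synthetic features and all hyperparameters below are placeholders for the eight ellipse-derived features used in the study.

```python
import numpy as np

def fit_logreg(X, y, lr=0.1, epochs=500):
    """Logistic regression by batch gradient descent.

    X: (n, d) feature matrix (e.g. ellipse area, eccentricity per nucleus),
    y: (n,) labels (1 = osteoclast nucleus, 0 = precursor nucleus).
    Returns weights w and bias b.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        grad_w = X.T @ (p - y) / n              # gradient of the log-loss
        grad_b = (p - y).mean()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    """Hard class labels from the fitted linear decision boundary."""
    return (X @ w + b > 0).astype(int)
```

On well-separated synthetic feature clusters the classifier recovers the labels almost perfectly; the 74% figure in the abstract reflects the much harder real image data.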
Abstract: In this paper, a new technique for fast painting with
different colors is presented. The idea of painting relies on applying
masks with different colors to the background. Fast painting is
achieved by applying these masks in the frequency domain instead of
spatial (time) domain. New colors can be generated automatically as a
result from the cross correlation operation. This idea was applied
successfully for faster specific data (face, object, pattern, and code)
detection using neural algorithms. Here, instead of performing cross
correlation between the input data (e.g., an image or a stream of
sequential data) and the weights of neural networks, the cross
correlation is performed between the colored masks and the
background. Furthermore, this approach is developed to reduce the
computation steps required by the painting operation. The principle of
divide and conquer strategy is applied through background
decomposition. Each background is divided into smaller sub-backgrounds, and each sub-background is then processed separately by
using a single faster painting algorithm. Moreover, the fastest painting
is achieved by using parallel processing techniques to paint the
resulting sub-backgrounds using the same number of faster painting
algorithms. In contrast to using the faster painting algorithm alone, the speed-up ratio increases with the size of the background when the faster painting algorithm is combined with background decomposition. Simulation
results show that painting in the frequency domain is faster than that in
the spatial domain.
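The core speed trick, cross correlation computed in the frequency domain, can be sketched as follows; the direct O(N²) version is included only to verify that both produce the same circular cross-correlation. Array sizes here are arbitrary toy values.

```python
import numpy as np

def xcorr_freq(mask, background):
    """Cross-correlate a small mask with a background via the FFT.

    The mask is zero-padded to the background's size; multiplying the
    background spectrum by the conjugate mask spectrum and inverting gives
    the full circular cross-correlation in O(N log N) instead of O(N^2).
    """
    h, w = background.shape
    pad = np.zeros((h, w))
    pad[:mask.shape[0], :mask.shape[1]] = mask
    spec = np.fft.fft2(background) * np.conj(np.fft.fft2(pad))
    return np.fft.ifft2(spec).real

def xcorr_direct(mask, background):
    """Reference O(N^2) circular cross-correlation, for checking only."""
    h, w = background.shape
    mh, mw = mask.shape
    out = np.zeros((h, w))
    for dy in range(h):
        for dx in range(w):
            s = 0.0
            for i in range(mh):
                for j in range(mw):
                    s += mask[i, j] * background[(dy + i) % h, (dx + j) % w]
            out[dy, dx] = s
    return out
```

The speed advantage grows with the background size, which is exactly why decomposing the background and correlating each piece in the frequency domain pays off.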
Abstract: In gas lifted oil fields, the lift gas should be distributed optimally among the wells that share gas from a common source in order to maximize total oil production. One objective of the paper is to show that a linear MPC comprising a control objective and an economic objective can be used both as an optimizer and as a controller for gas lifted systems. The MPC is based on a linearized model of the oil field developed from first-principles modeling. Simulation results show that the total oil production is increased by 3.4%. Difficulties in accurately measuring the bottom hole pressure using sensors in harsh operating conditions can be resolved by using an Unscented Kalman Filter (UKF) for estimation. In oil fields where the input disturbance (total supply of gas) is not measured, the UKF can also be used for disturbance estimation. Increased total oil production due to optimization leads to increased profit.
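The UKF mentioned above is built around the unscented transform. The following is a generic sketch of that core step, not the paper's oil-field model: a Gaussian is propagated through a nonlinearity via sigma points. For a linear map the result is exact, which is what the check below exploits. The scaling parameters are the common textbook defaults.

```python
import numpy as np

def unscented_transform(f, mu, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mu, P) through f using sigma points (UKF core).

    Returns the approximated mean and covariance of f(x). For a linear f
    the approximation is exact.
    """
    n = len(mu)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)  # matrix square root
    sigma = [mu] + [mu + S[:, i] for i in range(n)] + [mu - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))  # mean weights
    wc = wm.copy()                                   # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)
    Y = np.array([f(s) for s in sigma])
    mean = wm @ Y
    diff = Y - mean
    cov = (wc[:, None] * diff).T @ diff
    return mean, cov
```

A full UKF alternates this transform between the process model (prediction) and the measurement model (update), which is how the bottom hole pressure and the unmeasured gas-supply disturbance would be estimated.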
Abstract: Integration of system process information obtained
through an image processing system with an evolving knowledge
database to improve the accuracy and predictability of wear particle
analysis is the main focus of the paper. The objective is to intelligently automate the analysis of wear particles by classification via self-organizing maps. This is achieved using relationship measurements among corresponding attributes of various wear particle measurements. Finally, a visualization technique is proposed that helps the viewer understand and utilize these relationships, enabling accurate diagnostics.
Abstract: Glaucoma diagnosis involves extracting three features of the fundus image: the optic cup, the optic disc and the vasculature. Present manual diagnosis is expensive, tedious and time consuming. A number of studies have been conducted to automate this process.
However, the variability between the diagnostic capability of an automated system and that of an ophthalmologist has yet to be established. This
paper discusses the efficiency and variability between ophthalmologist opinion and a digital technique, thresholding. The efficiency and variability measures are based on image quality grading: poor, satisfactory or good. The images are separated into four channels: gray, red, green and blue. A scientific investigation was conducted with three ophthalmologists, who graded the images based on image quality. The images are then thresholded using multi-thresholding and graded in the same way as by the ophthalmologists. A comparison of the grades from the ophthalmologists and from thresholding is made. The results show that there is only a small variability between the results of the ophthalmologists and the digital thresholding.
Abstract: In this paper a non-parametric statistical pattern recognition algorithm for the problem of credit scoring is presented. The proposed algorithm is based on a k-means clustering algorithm and allows for the determination of subclasses of homogeneous elements in the data. The algorithm is tested on two benchmark datasets and its performance compared with other well-known pattern recognition algorithms for credit scoring.
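A sketch of the subclass idea, under the assumption of two numeric features and synthetic data: k-means finds subclasses within the "good" and "bad" classes separately, and a new applicant is scored by its nearest subclass centroid. This is an illustrative reading of the approach, not the paper's exact algorithm.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means with farthest-point initialization; returns centroids."""
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(X)))]
    while len(idx) < k:  # spread the initial centroids far apart
        d = np.min([((X - X[i]) ** 2).sum(axis=1) for i in idx], axis=0)
        idx.append(int(d.argmax()))
    centroids = X[idx].copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - centroids[None]) ** 2).sum(axis=2).argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def score(x, good_centroids, bad_centroids):
    """Classify an applicant by the nearest subclass centroid."""
    dg = ((good_centroids - x) ** 2).sum(axis=1).min()
    db = ((bad_centroids - x) ** 2).sum(axis=1).min()
    return "good" if dg < db else "bad"
```

Modelling each class as several subclasses lets the decision boundary bend around multi-modal classes, which a single prototype per class cannot do.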
Abstract: In this paper we present a generic approach to the problem of the blind estimation of the parameters of linear and convolutional error correcting codes. In a non-cooperative context, an adversary only has access to the noisy transmission he has intercepted. The interceptor has no knowledge of the parameters used by the legal users. So, before having access to the information, he first has to blindly estimate the parameters of the error correcting code of the communication. The presented approach has the main advantage that the problem of reconstruction of such codes can be expressed in a very simple way. This allows us to evaluate theoretical bounds on the complexity of the reconstruction process, but also bounds on the estimation rate. We show that some classical reconstruction techniques are optimal and also explain why some of them have theoretical complexities greater than those experimentally observed.
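One classical reconstruction primitive, shown here in the noiseless simplification (the paper of course deals with noisy interceptions): stacking intercepted codewords of an unknown (n, k) linear block code into a binary matrix and computing its rank over GF(2) recovers the code dimension k.

```python
import numpy as np

def gf2_rank(rows):
    """Rank of a binary matrix over GF(2); `rows` is an (m, n) 0/1 int array."""
    rows = rows.copy() % 2
    rank = 0
    m, n = rows.shape
    for col in range(n):
        # Find a pivot row with a 1 in this column, below the done rows.
        pivot = next((r for r in range(rank, m) if rows[r, col]), None)
        if pivot is None:
            continue
        rows[[rank, pivot]] = rows[[pivot, rank]]  # swap the pivot row up
        for r in range(m):
            if r != rank and rows[r, col]:
                rows[r] ^= rows[rank]              # eliminate the column (XOR)
        rank += 1
    return rank
```

With noise, the rank is no longer clean and one instead looks for near-dependencies, which is where the complexity bounds discussed in the abstract come into play.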
Abstract: In this paper we present an efficient system for speaker-independent speech recognition based on a neural network approach. The proposed architecture comprises two phases: a preprocessing phase, which consists of segmental normalization and feature extraction, and a classification phase, which uses neural networks based on nonparametric density estimation, namely the
general regression neural network (GRNN). The relative performances of the proposed model are compared to those of similar recognition systems based on the Multilayer Perceptron (MLP), the Recurrent Neural Network (RNN) and the well-known Discrete Hidden Markov Model (HMM-VQ), which we have also implemented.
Experimental results obtained with Arabic digits have shown that the
use of nonparametric density estimation with an appropriate
smoothing factor (spread) improves the generalization power of the
neural network. The word error rate (WER) is reduced significantly
over the baseline HMM method. GRNN computation is a successful alternative to the other neural networks and the DHMM.
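The GRNN is essentially Nadaraya-Watson kernel regression, which makes its prediction step very compact. A minimal sketch with one-hot class targets and an assumed spread value (the actual features and spread in the paper come from the Arabic-digit experiments):

```python
import numpy as np

def grnn_predict(X_train, Y_train, x, spread=0.5):
    """General regression neural network (Nadaraya-Watson) prediction.

    Y_train holds one-hot class vectors; the output is a kernel-weighted
    average of them, and argmax gives the predicted class. `spread` is the
    smoothing factor mentioned in the abstract.
    """
    d2 = ((X_train - x) ** 2).sum(axis=1)     # squared distances to patterns
    w = np.exp(-d2 / (2.0 * spread ** 2))     # pattern-layer activations
    return (w @ Y_train) / w.sum()            # summation / division layers
```

There is no iterative training: the training set itself is the network, and only the spread needs tuning, which is why an appropriate smoothing factor matters so much for generalization.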
Abstract: Knowing the geometrical pose of objects on a manufacturing line before robot manipulation is required and makes overall shape measurement less time consuming. To achieve this, information on shape representation and object matching is needed. Objects are compared via their descriptors, which are conceptually subtracted from each other to form a scalar metric; the smaller the metric value, the more similar the objects are considered to be. Rotating an object from its static pose in some direction changes the scalar metric value of the boundary information after feature extraction of the related object. In this paper, an indexing technique for the retrieval of 3D geometrical models based on similarity between boundary shapes is proposed, in order to measure 3D CAD object pose using object shape feature matching for a Computer Aided Testing (CAT) system in a production line. Experimental results show the effectiveness of the proposed method.
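The descriptor-subtraction metric can be sketched with a centroid-distance Fourier descriptor of the boundary, a common choice but not necessarily the paper's: FFT magnitudes are invariant to the starting point of the contour, so an in-plane rotated copy of a shape scores near-zero distance while a genuinely different shape does not.

```python
import numpy as np

def boundary_descriptor(points, n_coeffs=8):
    """Start-point- and scale-tolerant descriptor of a closed boundary.

    `points` is an (m, 2) array tracing the contour. The centroid-distance
    signature is Fourier-transformed; magnitudes are invariant to circular
    shifts of the sampling and are normalised by the DC term to drop scale.
    """
    centroid = points.mean(axis=0)
    r = np.linalg.norm(points - centroid, axis=1)  # centroid-distance signature
    mag = np.abs(np.fft.fft(r))
    return mag[1:n_coeffs + 1] / mag[0]

def shape_distance(a, b):
    """Scalar metric: descriptors are subtracted and reduced to one number."""
    return np.linalg.norm(boundary_descriptor(a) - boundary_descriptor(b))
```

In an indexing scheme, every stored CAD boundary gets such a descriptor once, and retrieval reduces to nearest-descriptor search on the scalar metric.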
Abstract: The paper forms part of a complex research project on the Romanian Grey Steppe, a unique breed in terms of biological and
cultural-historical importance, on the verge of extinction and which
has been included in a preservation programme of genetic resources
from Romania. The study of the genetic polymorphism of protein fractions, especially kappa-casein, and the genotype relations of
these lactoproteins with some quantitative and qualitative features of
milk yield represents a current theme and a novelty for this breed. In
the estimation of the genetic parameters we used R.E.M.L.
(Restricted Maximum Likelihood) method.
The main lactoprotein in milk, kappa-casein (K-cz), characterized in the specialized literature as a feature with a high degree of hereditary transmission, behaves as such in the nucleus under study, a value also confirmed by the heritability coefficient (h2 = 0.57). We must mention the medium values for milk and fat quantity (h2 = 0.26 and 0.29) and the fat and protein percentages of milk, which have a high hereditary influence (h2 = 0.71 and 0.63).
Correlations between kappa-casein and milk quantity are negative and strong. Between kappa-casein and other qualitative features of milk (fat content 0.58-0.67 and protein content 0.77-0.87), there are positive and very strong correlations. At the same time, correlations between kappa-casein and β-casein (β-cz) and β-lactoglobulin (β-lg), respectively, are positive with high values (0.37-0.45), indicating the same causes and determining factors for the two groups of features.
Abstract: A dynamic stall-corrected Blade Element-Momentum algorithm based on a hybrid polar is validated through comparison with Sandia experimental measurements on a 5-m diameter wind turbine of Troposkien shape. Different dynamic stall models are evaluated. The numerical predictions obtained using the extended aerodynamic coefficients provided by both Sheldahl and Klimas and Raciti Castelli et al. are compared to experimental data, determining the potential of the hybrid database for the numerical prediction of vertical-axis wind turbine performance.
Abstract: The existence of maximal durations drastically modifies performance evaluation in Discrete Event Systems (DES). The same particularity may be found in systems where the associated constraints do not concern time. For example, weight measures in the chemical industry are used to control the quantity of consumed raw materials. This parameter also plays a fundamental part in product quality, as the correct transformation process is based upon a given percentage of each essence. Weight regulation therefore increases the global productivity of the system by decreasing the quantity of rejected products. In this paper we present an approach based on mixing theories with different characteristics, fuzzy systems and Petri nets, to describe the behaviour. An industrial application to a tobacco manufacturing plant, where the critical parameter is weight, is presented as an illustration.
Abstract: Fast delay estimation methods, as opposed to
simulation techniques, are needed for incremental performance
driven layout synthesis. On-chip inductive effects are becoming
predominant in deep submicron interconnects due to increasing clock
speed and circuit complexity. Inductance causes noise in signal
waveforms, which can adversely affect the performance of the circuit
and signal integrity. Several approaches have been put forward which
consider the inductance for on-chip interconnect modelling. But at even higher frequencies, of the order of a few GHz, the shunt dielectric lossy component becomes comparable to the other electrical parameters in high-speed VLSI design. To cope with this effect, the on-chip interconnect has to be modelled as a
distributed RLCG line. Elmore delay based methods, although
efficient, cannot accurately estimate the delay for an RLCG interconnect
line. In this paper, an accurate analytical delay model has been
derived, based on first and second moments of RLCG
interconnection lines. The proposed model considers both the effect
of inductance and conductance matrices. We have performed the
simulation at the 0.18μm technology node, and an error as low as 5% has been achieved with the proposed model when compared to
SPICE. The importance of the conductance matrices in interconnect
modelling has also been discussed, and it is shown that if G is neglected in interconnect line modelling, it will result in a delay error as high as 6% when compared to SPICE.
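For contrast with the proposed two-moment RLCG model, the Elmore (first-moment) baseline that the abstract calls efficient but inaccurate can be computed for a simple RC ladder as follows; the paper's moment matching extends this idea with inductance and conductance terms, which this sketch deliberately omits.

```python
def elmore_delay(R, C):
    """Elmore delay at each node of an RC ladder (first moment of the response).

    R[i] is the resistance between node i-1 and node i; C[i] is the
    capacitance at node i. The delay at node i sums, over every capacitor k,
    the resistance shared by the source-to-i and source-to-k paths, which
    for a ladder is sum(R[0 .. min(i, k)]).
    """
    n = len(R)
    delays = []
    for i in range(n):
        d = 0.0
        for k in range(n):
            shared = sum(R[:min(i, k) + 1])  # path resistance common to i and k
            d += shared * C[k]
        delays.append(d)
    return delays
```

For a two-stage ladder with unit elements this reproduces the textbook values T1 = R1(C1+C2) = 2 and T2 = R1(C1+C2) + R2C2 = 3, showing why the metric is cheap; its inaccuracy on RLCG lines is what motivates the second moment.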
Abstract: A fair share objective has recently been included in the goal-oriented parallel computer job scheduling policy. However,
the previous work only presented the overall scheduling performance.
Thus, an analysis of the per-user performance of the policy is still lacking. In this
work, the details of per-user fair share performance under the
Tradeoff-fs(Tx:avgX) policy will be further evaluated. A basic fair
share priority backfill policy namely RelShare(1d) is also studied.
The performance of all policies is collected using an event-driven
simulator with three real job traces as input. The experimental results
show that high-demand users usually benefit under most policies because their jobs are large or they have a lot of jobs. In the large-job case, a single executed job may result in over-share during that period. In the other case, the jobs may be backfilled, improving their performance. However, users with a mixture of jobs may suffer because, while the smaller jobs are executing, the priority of the remaining
jobs from the same user will be lower. Further analysis does not show
any significant impact of users with a lot of jobs or users with a large
runtime approximation error.
Abstract: In the past decade, because of the wide applications of hybrid systems, many researchers have considered the modeling and
control of these systems. Since switching systems constitute an
important class of hybrid systems, in this paper a method for optimal
control of linear switching systems is described. The method is also applied to the two-tank system, which is a very appropriate system for analyzing different modeling and control techniques for hybrid systems. Simulation results show that, with this method, the control goals and the problem constraints can be satisfied by an appropriate selection of the cost function.