Abstract: This paper treats different aspects of entropy measures in classical information theory and statistical quantum mechanics, and presents the possibility of extending the definition of Von Neumann entropy to image and array processing. In the first part, we generalize the quantum entropy using the singular values of arbitrary rectangular matrices to measure randomness and the quality of a denoising operation; this new definition of entropy can be used to compare the performance of filtering methods. In the second part, we apply the concept of a pure state in the quantum formalism to generalize the maximum entropy method for the narrowband, far-field source localization problem. Several computer simulation results are presented to demonstrate the effectiveness of the proposed techniques.
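As a minimal sketch of the singular-value entropy idea (assuming the squared singular values are normalized into a probability vector; the paper's exact normalization may differ):

    import numpy as np

    def svd_entropy(A):
        # Entropy of the normalized singular-value spectrum of an arbitrary
        # rectangular matrix A; squared singular values are used so that p
        # matches the eigenvalues of the normalized A @ A.T (an assumption
        # of this sketch, not necessarily the paper's convention).
        s = np.linalg.svd(A, compute_uv=False)
        p = s**2 / np.sum(s**2)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # A noisy image patch typically has a flatter spectrum, hence a higher
    # entropy, than its denoised counterpart, so the measure can rank filters.
    rng = np.random.default_rng(0)
    clean = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 3, 48)))
    print(svd_entropy(clean), svd_entropy(clean + 0.3 * rng.standard_normal(clean.shape)))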
Abstract: In this paper, we present a new segmentation approach for focal liver lesions in contrast-enhanced ultrasound imaging. This approach, based on a two-cluster Fuzzy C-Means methodology, considers type-II fuzzy sets to handle the uncertainty due to the image modality (presence of speckle noise, low contrast, etc.) and to calculate the optimum inter-cluster threshold. Fine boundaries are
detected by a local recursive merging of ambiguous pixels. The
method has been tested on a representative database. Compared to
both Otsu and type-I Fuzzy C-Means techniques, the proposed
method significantly reduces the segmentation errors.
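For reference, a minimal two-cluster type-I Fuzzy C-Means baseline on a 1-D intensity vector is sketched below; the proposed method additionally builds type-II (interval) memberships to pick the inter-cluster threshold, which this sketch does not reproduce.

    import numpy as np

    def fcm_two_cluster(x, m=2.0, iters=100, tol=1e-6):
        # Minimal type-I FCM with two clusters on a 1-D intensity vector x.
        c = np.array([float(x.min()), float(x.max())])   # initial centers
        for _ in range(iters):
            d = np.abs(x[:, None] - c[None, :]) + 1e-12  # pixel-to-center distances
            u = d ** (-2.0 / (m - 1.0))                  # unnormalized memberships
            u /= u.sum(axis=1, keepdims=True)
            c_new = (u ** m).T @ x / (u ** m).sum(axis=0)
            if np.abs(c_new - c).max() < tol:
                c = c_new
                break
            c = c_new
        threshold = c.mean()   # simple threshold halfway between the two centers
        return c, u, threshold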
Abstract: Speaker Identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown
speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still room for improvement. In this paper, a Closed-Set Text-Independent Speaker
Identification System (CISI) based on a Multiple Classifier System
(MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of vector
quantization (VQ) and Gaussian Mixture Model (GMM) together
with the Expectation-Maximization (EM) algorithm for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid
approach based on Short Time Energy (STE) and Statistical
Modeling of Background Noise in the pre-processing step of the
feature extraction yields a better and more robust automatic speaker
identification system. Furthermore, using the Linde-Buzo-Gray (LBG) clustering algorithm to initialize the GMM, whose underlying parameters are then estimated by EM, improved the convergence rate and the system's performance. The system also uses a relative index as a confidence measure in case of contradiction between the GMM and VQ identification results. Simulation results carried out on the voxforge.org
speech database using MATLAB highlight the efficacy of the
proposed method compared to earlier work.
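A hedged sketch of the modeling pipeline described above, using librosa for MFCC extraction and scikit-learn's GaussianMixture for EM training; k-means initialization stands in for the LBG codebook step, and the VAD, VQ branch, and confidence measure are omitted.

    import librosa                              # assumed available for MFCC extraction
    from sklearn.mixture import GaussianMixture

    def train_speaker_model(wav_path, n_mix=16):
        # One GMM per enrolled speaker, trained on MFCC frames by EM.
        y, sr = librosa.load(wav_path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # frames x 13
        gmm = GaussianMixture(n_components=n_mix, covariance_type='diag',
                              init_params='kmeans', max_iter=200)
        return gmm.fit(mfcc)

    def identify(wav_path, models):
        # Closed-set decision: the speaker whose model gives the highest
        # average log-likelihood on the test utterance wins.
        y, sr = librosa.load(wav_path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T
        return max(models, key=lambda spk: models[spk].score(mfcc))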
Abstract: In this article, we deal with a variant of the classical
course timetabling problem that has a practical application in many
areas of education. In particular, in this paper we are interested in remedial courses in high schools. The purpose of such courses is to provide under-prepared students with the skills necessary to succeed in their studies; a student might be under-prepared in an entire course, or only in a part of it. The limited availability
of funds, as well as the limited time and teachers at their disposal, often requires schools to choose which courses and/or which
teaching units to activate. Thus, schools need to model the training
offer and the related timetabling, with the goal of ensuring the
highest possible teaching quality while meeting the above-mentioned financial, time and resource constraints. Moreover, there are some
prerequisites between the teaching units that must be satisfied. We
first present a Mixed-Integer Programming (MIP) model to solve
this problem to optimality. However, the presence of many peculiar constraints inevitably increases the complexity of the mathematical model. Thus, a general-purpose solver can handle only small instances, while solving real-life-sized instances of the model requires specific techniques or heuristic approaches. For this purpose, we also propose a heuristic
approach, in which we make use of a fast constructive procedure
to obtain a feasible solution. To assess our exact and heuristic approaches, we perform extensive computational experiments on both real-life instances (obtained from a high school in Lecce, Italy) and randomly generated instances. Our tests show that the MIP model is never solved to optimality, with an average optimality gap of 57%. On the other hand, the heuristic algorithm is much faster (in about 50% of the considered instances it converges in approximately half of the time limit) and in many cases improves on the objective function value obtained by the MIP model. Such an improvement ranges between 18% and 66%.
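To make the MIP idea concrete, here is a minimal sketch with hypothetical data, modeling only the activation of teaching units under a budget and prerequisite constraints; the paper's full model also covers timetabling, teachers, and its other peculiar constraints.

    import pulp

    units   = ['U1', 'U2', 'U3']           # teaching units (hypothetical)
    benefit = {'U1': 8, 'U2': 5, 'U3': 6}  # teaching-quality score if activated
    cost    = {'U1': 3, 'U2': 2, 'U3': 4}  # cost of activating each unit
    budget  = 6
    prereq  = [('U2', 'U1')]               # U2 may be activated only if U1 is

    prob = pulp.LpProblem('remedial_courses', pulp.LpMaximize)
    x = pulp.LpVariable.dicts('activate', units, cat='Binary')

    prob += pulp.lpSum(benefit[u] * x[u] for u in units)         # quality objective
    prob += pulp.lpSum(cost[u] * x[u] for u in units) <= budget  # funding limit
    for a, b in prereq:                                          # prerequisites
        prob += x[a] <= x[b]

    prob.solve()
    print({u: int(x[u].value()) for u in units})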
Abstract: Localization of nodes is one of the key issues in Wireless Sensor Networks (WSN) that has gained wide attention in recent years. Existing localization techniques can be generally categorized into two types: range-based and range-free. Compared with range-based schemes, range-free schemes are more cost-effective, because no additional ranging devices are needed. As a result, we focus our research on range-free schemes. In this paper, we study three types of range-free localization algorithms to compare the
localization error and energy consumption of each one. The Centroid algorithm requires that a normal node have at least three neighboring anchors, while the DV-Hop algorithm does not have this requirement. The third algorithm studied, the amorphous algorithm, is similar to DV-Hop; its idea is to estimate the hop distance between two nodes instead of the straight-line distance between them. The simulation
results show that the localization accuracy of the amorphous
algorithm is higher than that of the other algorithms, while its energy consumption does not increase significantly.
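The Centroid algorithm mentioned above is simple enough to state directly: a node estimates its position as the mean of the coordinates of the anchors it hears.

    import numpy as np

    def centroid_localize(anchors):
        # Centroid algorithm: a normal node hearing at least three anchors
        # estimates its position as the mean of their coordinates.
        anchors = np.asarray(anchors, dtype=float)
        if len(anchors) < 3:
            raise ValueError('centroid needs at least three neighbor anchors')
        return anchors.mean(axis=0)

    # Example with three hypothetical anchor positions (x, y):
    print(centroid_localize([(0, 0), (10, 0), (5, 8)]))   # -> [5.0, 2.67]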
Abstract: The purpose of this study is the discrimination of 28 postmenopausal women with osteoporotic femoral fractures from an age-matched control group of 28 women using texture analysis based on
fractals. Two pre-processing approaches are applied to radiographic images; these techniques are compared to justify the choice of the
pre-processing method. Furthermore, the values of the fractal
dimension are compared to those of the fractal signature in terms of
the classification of the two populations. In a second analysis, the BMD measured at the proximal femur was compared to the fractal analysis; the latter, which is a non-invasive technique, allowed a better discrimination. The results confirm that the fractal analysis of
texture on calcaneus radiographs is able to discriminate osteoporotic
patients with femoral fracture from controls. This discrimination was more efficient than that obtained by BMD alone, and it persisted when comparing subgroups with overlapping values of BMD.
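As an illustration of fractal texture measures, a generic box-counting estimate of the fractal dimension is sketched below; the paper's fractal dimension and fractal signature computations on radiographs are not necessarily this estimator.

    import numpy as np

    def box_counting_dimension(img, threshold=0.5):
        # Generic box-counting estimate on a binarized texture patch; img is
        # a 2-D array with values in [0, 1] containing structure above the
        # threshold.
        Z = img > threshold
        n = 2 ** int(np.floor(np.log2(min(Z.shape))))
        Z = Z[:n, :n]
        sizes, counts = [], []
        k = n
        while k >= 2:
            boxes = Z.reshape(n // k, k, n // k, k).any(axis=(1, 3))  # occupied boxes
            sizes.append(k)
            counts.append(max(int(boxes.sum()), 1))   # guard against log(0)
            k //= 2
        # N(k) ~ k**(-D): the slope of log N against log(1/k) estimates D.
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope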
Abstract: Background: Muscle Energy Techniques (MET) have been widely used by manual therapists over the past years, but limited research has validated their use and there is limited evidence to substantiate the theories used to explain their effects. Objective: To
investigate the effect of Muscle Energy Technique (MET) on anterior
pelvic tilt in patients with lumbar spondylosis. Design: Randomized
controlled trial. Subjects: Thirty patients with anterior pelvic tilt, of both sexes and aged between 35 and 50 years, were involved; they were divided into MET and control groups with 15 patients in each. Methods: All patients received 3 sessions/week for 4 weeks, where the
study group received MET, ultrasound (US) and infrared (IR), and the control group received US and IR only. The pelvic angle was measured by a palpation meter, pain severity by the visual analogue scale, and
functional disabilities by the Oswestry disability index. Results: Both
groups showed significant improvement in all measured variables.
The MET group was significantly better than the control group in pelvic angle, pain severity, and functional disability (p = 0.001, 0.0001, and 0.0001, respectively). Conclusion and implication: the study group achieved greater improvement in all measured variables than the control group, which implies that the application of MET in combination with US and IR is more effective in improving the pelvic tilt angle, pain severity and functional disability than using electrotherapy only.
Abstract: Particle size distribution, one of the most important characteristics of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nanoparticles
under the influence of electric field in Electrical Mobility
Spectrometer (EMS) reveals the size distribution of these particles.
The accuracy of this measurement is influenced by flow conditions,
geometry, electric field and particle charging process, therefore by
the transfer function (transfer matrix) of the instrument. In this work,
a wire-cylinder corona charger was designed and the combined field-diffusion charging process of injected polydisperse aerosol particles was numerically simulated as a prerequisite for the study of a multichannel EMS. The result, a cloud of particles with a non-uniform charge distribution, was introduced into the EMS. The flow pattern and
electric field in the EMS were simulated using Computational Fluid
Dynamics (CFD) to obtain particle trajectories in the device and
therefore to calculate the reported signal by each electrometer.
According to the output signals (resulting from the bombardment of particles and the transfer of their charges as currents), we proposed a modification to the size of the detecting rings (which are connected to electrometers) in order to evaluate particle size distributions more accurately. Based on the capability of the system to transfer information about the size distribution of the injected particles, we proposed a benchmark for assessing the optimality of the design. This method applies the concept of Von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality: Shannon entropy is the ''average amount of information contained in an event, sample or character extracted from a data stream''.
Evaluating the responses (signals) obtained with various configurations of detecting rings, the modified configuration gave the best predictions of the size distributions of the injected particles; it was also the one with the maximum entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, the entropy is extracted from the transfer matrix of the instrument for each configuration.
Ultimately, various clouds of particles were introduced into the simulations and the predicted size distributions were compared to the
exact size distributions.
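A minimal sketch of the entropy benchmark, assuming the transfer matrix is normalized into a probability distribution (the paper's exact normalization may differ):

    import numpy as np

    def transfer_matrix_entropy(T):
        # Shannon entropy extracted from an instrument transfer matrix T
        # (rows: electrometer channels, columns: particle size bins).
        p = np.abs(T).ravel()
        p = p / p.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))   # proxy for information capacity, in bits

    # A ring configuration whose channels separate the size bins more
    # distinctly spreads probability mass more evenly and scores higher.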
Abstract: The purpose of this project is to propose a quick and
environmentally friendly alternative to measure the quality of oils
used in the food industry. There is evidence that the repeated and indiscriminate use of oils in food processing causes physicochemical changes, with the formation of potentially toxic compounds that can affect the health of consumers and cause organoleptic changes. In
order to assess the quality of oils, non-destructive optical techniques
such as Interferometry offer a rapid alternative to the use of reagents,
using only the interaction of light with the oil. Through this project, we
used interferograms of samples of oil placed under different heating
conditions to establish the changes in their quality. These
interferograms were obtained by means of a Mach-Zehnder
Interferometer using a beam of light from a 10 mW HeNe laser at 632.8 nm. Each interferogram was captured and analyzed, and its full width at half maximum (FWHM) was measured using the AMCap and ImageJ software. The FWHM values were organized into three groups. It was observed that the average of the FWHMs of group A shows an almost linear behavior; therefore it is probable that the exposure time is not relevant when the oil is kept at constant temperature. Group B exhibits a slightly exponential trend when the temperature rises from 373 K to 393 K. Results of the Student's t-test show, at the 95% confidence level (p = 0.05), a variation in the molecular composition of both samples.
Furthermore, we found a correlation between the Iodine Indexes
(Physicochemical Analysis) and the Interferograms (Optical
Analysis) of group C. Based on these results, this project highlights
the importance of the quality of the oils used in the food industry and
shows how Interferometry can be a useful tool for this purpose.
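A simple numerical stand-in for the FWHM measurement performed with AMCap and ImageJ (assuming a single-peaked 1-D intensity profile):

    import numpy as np

    def fwhm(profile, dx=1.0):
        # Full width at half maximum of a single-peaked 1-D intensity
        # profile, in the same units as the sample spacing dx.
        profile = np.asarray(profile, dtype=float)
        above = np.where(profile >= profile.max() / 2.0)[0]
        return (above[-1] - above[0]) * dx

    # Example on a Gaussian fringe profile: FWHM ~ 2.355 * sigma.
    x = np.linspace(-10, 10, 2001)
    print(fwhm(np.exp(-x**2 / (2 * 1.5**2)), dx=x[1] - x[0]))   # ~3.53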
Abstract: Voting algorithms are extensively used to make
decisions in fault-tolerant systems where each redundant module
gives inconsistent outputs. Popular voting algorithms include
majority voting, weighted voting, and inexact majority voters. Each
of these techniques suffers from scenarios where agreement does not exist for the given voter inputs. This has been successfully overcome in the literature using fuzzy theory. Our previous work concentrated on a neuro-fuzzy algorithm in which training the neural component substantially improved the prediction result of the voting system. However, the weight training of the neural network is sub-optimal. This study
proposes to optimize the weights of the neural network using the Artificial Bee Colony (ABC) algorithm. Experimental results show that the proposed system improves the decision making of the voting algorithms.
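A simplified sketch of the ABC weight optimization (employed- and scout-bee phases only; the onlooker phase and the actual neural-network loss are omitted), with loss standing in for the network's error as a function of its weight vector:

    import numpy as np

    def abc_optimize(loss, dim, n_food=20, limit=30, iters=200, lo=-1.0, hi=1.0):
        # Minimal Artificial Bee Colony loop minimizing loss over a vector.
        rng = np.random.default_rng(0)
        foods = rng.uniform(lo, hi, (n_food, dim))   # candidate weight vectors
        fit = np.array([loss(f) for f in foods])
        trials = np.zeros(n_food, dtype=int)
        for _ in range(iters):
            for i in range(n_food):                  # employed-bee phase
                k = rng.choice([j for j in range(n_food) if j != i])
                d = rng.integers(dim)
                cand = foods[i].copy()
                cand[d] += rng.uniform(-1, 1) * (foods[i][d] - foods[k][d])
                f = loss(cand)
                if f < fit[i]:
                    foods[i], fit[i], trials[i] = cand, f, 0
                else:
                    trials[i] += 1
            for i in np.where(trials > limit)[0]:    # scout-bee phase
                foods[i] = rng.uniform(lo, hi, dim)  # abandon stagnant sources
                fit[i] = loss(foods[i])
                trials[i] = 0
        best = fit.argmin()
        return foods[best], fit[best]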
Abstract: The quantitative study of cell mechanics is of
paramount interest, since it regulates the behaviour of the living cells
in response to the myriad of extracellular and intracellular
mechanical stimuli. Novel experimental techniques, together with robust computational approaches, have given rise to new theories and models, which describe cell mechanics as a combination of biomechanical and biochemical processes. This review paper
encapsulates the existing continuum-based computational approaches
that have been developed for interpreting the mechanical responses of
living cells under different loading and boundary conditions. The
salient features and drawbacks of each model are discussed from both
structural and biological points of view. This discussion can
contribute to the development of even more precise and realistic
computational models of cell mechanics based on continuum
approaches or on their combination with microstructural approaches,
which in turn may provide a better understanding of
mechanotransduction in living cells.
Abstract: Accurate forecasting of fresh produce demand is one of the challenges faced by Small and Medium Enterprise (SME) wholesalers. This paper is an attempt to understand the causes of the high variability (e.g., weather, holidays) in the demand faced by SME wholesalers; understanding the significance of such unidentified factors may improve the forecasting accuracy. This
paper presents the current literature on the factors used to predict
demand and the existing forecasting techniques of short shelf life
products. It then investigates a variety of possible internal and external factors, some of which have not been used by other researchers in the demand prediction process. The results presented in this paper are further analysed using a number of techniques to minimize noise in the data. For the analysis, past sales data (January 2009 to May 2014) from a UK-based SME wholesaler are used, and the results presented are limited to the product 'Milk', focused on cafés in Derby. A correlation analysis is performed to check the dependence of the variability factors on the actual demand. A PCA is then performed to
understand the significance of factors identified using correlation.
The PCA results suggest that the cloud cover, weather summary and
temperature are the most significant factors that can be used in
forecasting the demand. The correlation of the above three factors increases at the monthly level and becomes more stable compared to the weekly and daily demand.
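A minimal sketch of the correlation-then-PCA analysis with pandas and scikit-learn; the file name and column names below are hypothetical placeholders for the wholesaler's data.

    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Hypothetical frame: one row per day, demand plus candidate factors.
    df = pd.read_csv('milk_sales.csv')
    factors = ['cloud_cover', 'weather_summary_code', 'temperature', 'holiday']

    # Step 1: correlation of each factor with actual demand.
    print(df[factors + ['demand']].corr()['demand'])

    # Step 2: PCA on the standardized factors to rank their significance.
    X = StandardScaler().fit_transform(df[factors])
    pca = PCA().fit(X)
    print(pca.explained_variance_ratio_)                    # variance per component
    print(pd.DataFrame(pca.components_, columns=factors))   # factor loadings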
Abstract: Despite the advances made in various new
technologies, application of these technologies for agriculture still
remains a formidable task, as it involves the integration of diverse domains for monitoring the different processes involved in agricultural management. Advances in ambient intelligence represent one of the most powerful technologies for increasing the yield of agricultural crops, mitigating the impact of water scarcity and climate change, and improving methods for managing pests, weeds and diseases. This paper proposes a GPS-assisted, machine-to-machine solution that combines information collected by multiple sensors for the automated management of paddy crops. To maintain the economic viability of paddy cultivation, the various techniques used in agriculture are discussed, and a novel system that uses ambient intelligence techniques is proposed in this paper. The ambient-intelligence-based agricultural system offers considerable scope for improving crop management.
Abstract: Background subtraction and temporal difference are
often used for moving object detection in video. Both approaches are computationally simple and easy to deploy in real-time image processing. However, while background subtraction is highly
sensitive to dynamic background and illumination changes, the
temporal difference approach is poor at extracting relevant pixels of
the moving object and at detecting the stopped or slowly moving
objects in the scene. In this paper, we propose a simple moving object
detection scheme based on adaptive background subtraction and
temporal difference exploiting dynamic background updates. The
proposed technique consists of histogram equalization, a linear
combination of background and temporal difference, followed by the
novel frame-based and pixel-based background updating techniques.
Finally, morphological operations are applied to the output images.
Experimental results show that the proposed algorithm overcomes the drawbacks of both the background subtraction and temporal difference methods and provides better performance than either method alone.
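A sketch of the overall scheme (histogram equalization, linear combination of the background and temporal differences, pixel-based background update, morphology); the weights, threshold, learning rate, and the omitted frame-based update rule are assumptions, not the paper's values.

    import cv2
    import numpy as np

    ALPHA, BETA = 0.6, 0.4   # weights of the linear combination (assumed)
    RHO = 0.05               # learning rate for the pixel-based background update

    cap = cv2.VideoCapture('scene.avi')   # hypothetical input video
    ok, frame = cap.read()
    prev = cv2.equalizeHist(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    bg = prev.astype(np.float32)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.equalizeHist(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        bg_diff = cv2.absdiff(gray, bg.astype(np.uint8))   # background subtraction
        tmp_diff = cv2.absdiff(gray, prev)                 # temporal difference
        combined = cv2.addWeighted(bg_diff, ALPHA, tmp_diff, BETA, 0)
        _, mask = cv2.threshold(combined, 30, 255, cv2.THRESH_BINARY)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
        bg = (1 - RHO) * bg + RHO * gray                   # pixel-based update
        prev = gray
        cv2.imshow('moving objects', mask)
        if cv2.waitKey(1) == 27:
            break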
Abstract: Neural activity in the human brain starts from the
early stages of prenatal development. These signals generated by the brain are electrical in nature and represent not only brain function but also the status of the whole body. At the
present moment, three methods can record functional and
physiological changes within the brain with high temporal resolution
of neuronal interactions at the network level: the
electroencephalogram (EEG), the magnetoencephalogram (MEG),
and functional magnetic resonance imaging (fMRI); each of these has
advantages and shortcomings. EEG recording with a large number of
electrodes is now feasible in clinical practice. Multichannel EEG
recorded from the scalp surface provides very valuable but indirect
information about the source distribution. However, deep electrode measurements yield more reliable information about the source locations. Intracranial recordings and scalp EEG are used with source imaging techniques to determine the locations and strengths of the epileptic activity. As a source localization method, Low
Resolution Electro-Magnetic Tomography (LORETA) is solved for
the realistic geometry based on both forward methods, the Boundary
Element Method (BEM) and the Finite Difference Method (FDM). In
this paper, we review the findings of EEG-LORETA studies on epilepsy.
Abstract: Since the 1920s, industry has almost completely replaced riveting with permanent welded joints in the production of structures and the manufacture of other products. Arc welding is the most widely used welding process in industry. It is accomplished by the heat of an electric arc, which
melts the base metal while the molten metal droplets are transferred
through the arc to the welding pool, protected from the atmosphere
by a gas curtain. The GMAW (Gas Metal Arc Welding) process is influenced by variables such as current, polarity, welding speed, electrode extension, position and moving direction, type of joint, and the welder's ability, among others. It is remarkable that knowledge and control of these variables are essential for obtaining satisfactory quality welds; since they are interconnected, changes in one of them require changes in one or more of the others to produce the desired results. The optimum values are affected by the type of base
metal, the electrode composition, the welding position and the quality
requirements. Thus, this paper proposes a new methodology that adds vibration as a variable, through a mechanism developed for GMAW welding, in order to improve the mechanical and metallurgical properties in a way that does not depend on the welder's ability and enables repeatability of the welds. For confirmation, metallographic analyses and mechanical tests were performed.
Abstract: Cemented carbides, owing to their excellent
mechanical properties, have been of immense interest in the field of
hard materials for the past few decades. A number of processing
techniques have been developed to obtain high quality carbide tools,
with a wide range of grain size depending on the application and
requirements. Microwave sintering is one of the heating processes,
which has been used to prepare a wide range of materials including
ceramics. A deep understanding of microwave sintering and its contribution to the control of grain growth and to the deformation of the resulting carbide materials requires further study and attention. In addition, the effect of binder materials and their behavior during microwave sintering is another area that requires clear understanding. This review aims to focus on microwave sintering, providing information on how the process works and what types of materials it is
best suited for. A closer look is then taken at some microwave-sintered Tungsten Carbide-Cobalt samples, highlighting some of the key issues and challenges faced in this
research area.
Abstract: In recent years, multi-antenna techniques have been considered as a potential solution to increase the throughput of future wireless communication systems. The objective of this article is to study the MIMO (Multiple Input Multiple Output) transmission and reception system and to present the different decoding techniques at the receiver. First, we present the least complex techniques, linear receivers such as the zero-forcing (ZF) equalizer and the minimum mean squared error (MMSE) receiver; then a nonlinear technique called ordered successive interference cancellation (OSIC) and the optimal detector based on the maximum likelihood (ML) criterion. Finally, we simulate the associated decoding algorithms for the MIMO system (ZF, MMSE, OSIC and ML) and compare the performance of these algorithms in the MIMO context.
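For the linear receivers, the ZF and MMSE detectors reduce to two matrix formulas, sketched here for a flat-fading channel H with noise variance sigma2 (the toy example values are illustrative only):

    import numpy as np

    def zf_detect(H, y):
        # Zero-forcing: invert the channel (pseudo-inverse in general).
        return np.linalg.pinv(H) @ y

    def mmse_detect(H, y, sigma2):
        # MMSE: regularize the channel inversion with the noise variance.
        Nt = H.shape[1]
        W = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(Nt)) @ H.conj().T
        return W @ y

    # Toy 2x2 MIMO example with QPSK-like symbols:
    rng = np.random.default_rng(1)
    H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
    x = np.array([1 + 1j, -1 - 1j]) / np.sqrt(2)
    y = H @ x + 0.1 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
    print(np.sign(zf_detect(H, y).real), np.sign(mmse_detect(H, y, 0.01).real))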
Abstract: The construction and reconstruction of settlements and individual municipalities, environmental management, the deployment of productive forces, and the building of transport and technical infrastructure require a large expenditure of material and human resources. That is why the economic aspects of most decisions in these fields come to the foreground and are often decisive. It is all the more serious, then, that the economic aspects of settlement creation and function remain largely unstudied as a whole: one cannot yet speak of an established set of techniques and methods, only of traditional indicators and experiments with new approaches. This is true both at the level of the national economy and in urban design itself. Few specific economic patterns shaping settlements have been identified, and even less can one speak of controlling them. Moreover, practical assessments of the economics of specific solutions often use unsuitable indicators; economics is usually identified with the lowest acquisition cost or high-intensity land use, with little regard for functional efficiency and for the much higher, little-studied operating and maintenance costs.
Abstract: Floorplanning plays a vital role in the physical design
process of Very Large Scale Integrated (VLSI) chips. It is an
essential design step to estimate the chip area prior to the optimized
placement of digital blocks and their interconnections. Since VLSI
floorplanning is an NP-hard problem, many optimization techniques have been adopted in the literature. In this work, a music-inspired Harmony Search (HS) algorithm is used for fixed-die-outline constrained floorplanning, with the aim of reducing the total chip
area. HS draws inspiration from the musical improvisation process of
searching for a perfect state of harmony. Initially, a B*-tree is used to generate the primary floorplan for the given rectangular hard modules, and then the HS algorithm is applied to obtain an optimal
solution for an efficient floorplan. Experimental results for the HS algorithm are obtained on the MCNC benchmark circuits.
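A minimal continuous Harmony Search loop is sketched below to illustrate the metaheuristic itself; the paper couples HS with a B*-tree floorplan representation rather than a plain real-valued vector, and the parameter values here are illustrative.

    import numpy as np

    def harmony_search(cost, dim, lo, hi, hms=20, hmcr=0.9, par=0.3,
                       bw=0.05, iters=5000, seed=0):
        # Minimal continuous Harmony Search minimizing cost over [lo, hi]^dim.
        rng = np.random.default_rng(seed)
        hm = rng.uniform(lo, hi, (hms, dim))          # harmony memory
        f = np.array([cost(h) for h in hm])
        for _ in range(iters):
            new = np.empty(dim)
            for d in range(dim):
                if rng.random() < hmcr:               # memory consideration
                    new[d] = hm[rng.integers(hms), d]
                    if rng.random() < par:            # pitch adjustment
                        new[d] += bw * rng.uniform(-1, 1) * (hi - lo)
                else:                                 # random improvisation
                    new[d] = rng.uniform(lo, hi)
            new = np.clip(new, lo, hi)
            fn = cost(new)
            worst = f.argmax()
            if fn < f[worst]:                         # replace the worst harmony
                hm[worst], f[worst] = new, fn
        return hm[f.argmin()], f.min()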