Abstract: This paper deals with the synthesis of a fuzzy state feedback controller for an induction motor with optimal performance. First, the Takagi-Sugeno (T-S) fuzzy model is employed to approximate the nonlinear system in the field-oriented synchronous d-q rotating frame. Next, a fuzzy controller is designed to stabilize the induction motor and guarantee a minimum disturbance attenuation level for the closed-loop system. The fuzzy control gains are obtained by solving a set of linear matrix inequalities (LMIs). Finally, simulation results are given to demonstrate the controller's effectiveness.
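The blending mechanism behind a T-S fuzzy model can be illustrated with a minimal sketch. This is not the paper's induction-motor model: the scalar local models, membership functions, and feedback gains below are hypothetical, chosen only to show how normalized memberships interpolate local linear dynamics and how a parallel-distributed-compensation control law blends the local gains.

```python
# Illustrative T-S fuzzy sketch (hypothetical 1-D system, not the paper's
# induction-motor model): local linear models x' = a_i*x + b_i*u are blended
# through normalized membership weights h_i(x).

def memberships(x, x_min=-1.0, x_max=1.0):
    """Two complementary linear membership weights over [x_min, x_max]
    that always sum to 1."""
    x = min(max(x, x_min), x_max)
    h1 = (x_max - x) / (x_max - x_min)
    return [h1, 1.0 - h1]

def ts_step(x, u, dt=0.01):
    """One Euler step of the blended T-S dynamics."""
    local = [(-1.0, 1.0), (-2.0, 1.0)]   # (a_i, b_i) per rule (assumed)
    h = memberships(x)
    dx = sum(hi * (a * x + b * u) for hi, (a, b) in zip(h, local))
    return x + dt * dx

def simulate(x0, gains=(-3.0, -3.0), steps=2000):
    """Parallel distributed compensation: u = sum_i h_i * F_i * x."""
    x = x0
    for _ in range(steps):
        h = memberships(x)
        u = sum(hi * f * x for hi, f in zip(h, gains))
        x = ts_step(x, u)
    return x
```

In the paper, the gains F_i are not hand-picked as here but come from the LMI feasibility problem, which certifies stability of every blended combination of the local closed loops.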
Abstract: The SAD (Sum of Absolute Differences) algorithm is heavily used in motion estimation, a computationally demanding process in motion picture encoding. To enhance the performance of motion picture encoding on a VLIW processor, an efficient implementation of the SAD algorithm on that processor is essential. The SAD algorithm is programmed as a nested loop with a conditional branch. On VLIW processors, loops are usually optimized by software pipelining, but research on optimal scheduling of software pipelining for nested loops, especially nested loops with conditional branches, is rare. In this paper, we propose an optimal scheduling and implementation of the SAD algorithm with a conditional branch on a VLIW DSP processor. The proposed scheduling first transforms the nested loop with a conditional branch into a single loop with a conditional branch, taking into account full utilization of the ILP capability of the VLIW processor and early escape from the loop. It then applies a modulo scheduling technique developed for single loops. Based on this scheduling strategy, an optimal implementation of the SAD algorithm on TMS320C67x, a VLIW DSP, is presented. Experiments on a TMS320C6713 DSK show that an H.263 encoder with the proposed SAD implementation outperforms H.263 encoders with other SAD implementations, and that the code size of the optimal SAD implementation is small enough to be appropriate for embedded environments.
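The SAD kernel and its early-escape branch can be sketched in plain Python (the paper's contribution is the VLIW scheduling of this loop, not the kernel itself; block layout and thresholds below are generic assumptions):

```python
# Sketch of the SAD kernel with the early-escape conditional branch:
# the accumulation aborts as soon as the running sum exceeds the best
# SAD found so far -- exactly the branch that complicates pipelining.

def sad_block(cur, ref, best_so_far):
    """Sum of absolute differences over two flattened blocks, with early
    escape. Returns best_so_far unchanged if the partial sum exceeds it."""
    total = 0
    for c, r in zip(cur, ref):
        total += abs(c - r)
        if total >= best_so_far:      # conditional branch: escape early
            return best_so_far
    return total

def motion_search(cur, candidates):
    """Pick the candidate reference block with minimum SAD."""
    best, best_idx = float("inf"), -1
    for i, ref in enumerate(candidates):
        s = sad_block(cur, ref, best)
        if s < best:
            best, best_idx = s, i
    return best_idx, best
```

The early escape saves work on poor candidates, but on a VLIW machine the branch interrupts the steady-state pipeline, which is why the loop transformation described above matters.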
Abstract: Clustering is the process of identifying homogeneous groups of objects, called clusters, and is an interesting topic in data mining; the objects within a group share similar characteristics. This paper discusses a robust clustering process for image data using two dimension-reduction approaches: two-dimensional principal component analysis (2DPCA) and principal component analysis (PCA). A standard way to handle high-dimensional data is dimension reduction, which transforms the data into a lower-dimensional space with limited loss of information; one of the most common forms of dimensionality reduction is PCA. 2DPCA, often called a variant of PCA, treats the image matrices directly as 2D matrices: they do not need to be transformed into vectors, so the image covariance matrix can be constructed directly from the original image matrices. The classical covariance matrix being decomposed is very sensitive to outlying observations. The objective of this paper is to compare the performance of robust minimizing vector variance (MVV) in the two-dimensional projection (2DPCA) and in PCA for clustering arbitrary image data when outliers are hidden in the data set. The simulation aspects of robustness and an illustration of clustering images are discussed at the end of the paper.
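The 2DPCA covariance construction described above, G = (1/n) Σᵢ (Aᵢ - Ā)ᵀ(Aᵢ - Ā) built directly from image matrices, can be sketched as follows (toy pure-Python matrices; real images and the robust MVV estimator are not reproduced here):

```python
# Sketch of the 2DPCA image covariance: built directly from the image
# matrices A_i, with no image-to-vector flattening step.

def mat_sub(a, b):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def mat_t(a):
    return [list(col) for col in zip(*a)]

def mat_mul(a, b):
    bt = mat_t(b)
    return [[sum(x * y for x, y in zip(row, col)) for col in bt] for row in a]

def image_covariance_2dpca(images):
    """G = (1/n) * sum_i (A_i - mean)^T (A_i - mean); G is cols x cols."""
    n = len(images)
    rows, cols = len(images[0]), len(images[0][0])
    mean = [[sum(img[r][c] for img in images) / n for c in range(cols)]
            for r in range(rows)]
    g = [[0.0] * cols for _ in range(cols)]
    for img in images:
        d = mat_sub(img, mean)
        dtd = mat_mul(mat_t(d), d)
        for r in range(cols):
            for c in range(cols):
                g[r][c] += dtd[r][c] / n
    return g
```

Note that G has the size of the image width rather than of the flattened pixel vector, which is the practical advantage of 2DPCA over classical PCA on images.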
Abstract: A numerical investigation has been carried out to understand the melting characteristics of a phase change material (PCM) in a fin-type latent heat storage with embedded aluminum spiral fillers. It is known that the melting performance of PCM can be significantly improved by increasing the number of embedded metallic fins in the latent heat storage system, but only up to a certain number, beyond which additional fins yield only a small improvement in heat transfer rate. Hence, adding aluminum spiral fillers within the fin gaps is an option to improve heat transfer internally. This paper presents extensive computational visualizations of the PCM melting patterns of the proposed fin-spiral-filler configuration. The aim of this investigation is to understand the PCM's melting behavior by observing the movement of natural convection currents and the formation of melting fronts. Fluent 6.3 simulation software was used to produce two-dimensional visualizations of melt fractions, temperature distributions and flow fields to illustrate the melting process internally. The results show that adding aluminum spiral fillers in fin-type latent heat storage promotes small but more active natural convection currents and improves the melting of the PCM.
Abstract: In managing healthcare logistics, cost is not the only
factor to be considered. The criticality of items used in patient
care services plays an important role as well. A stock-out incident of
a high critical item could threaten a patient's life. In this paper, the
DMAIC (Define-Measure-Analyze-Improve-Control) methodology is
used to drive improvement projects based on customer driven critical
to quality characteristics at a Jordanian hospital. This paper shows
how the application of Six Sigma improves the performance of the
case hospital logistics system by reducing the number of stock-out
incidents.
Abstract: The aim of this research is to develop a fast and reliable surveillance system based on a personal digital assistant (PDA) device, extending to the PDA the moving-object detection capability already available on personal computers, and to compare the performance of the background subtraction (BS) and temporal frame differencing (TFD) techniques to determine which is more suitable for the PDA platform. To reduce noise and prepare frames for the moving-object detection stage, each frame is first converted to a gray-scale representation and then smoothed using a Gaussian low-pass filter. Two moving-object detection schemes, BS and TFD, have been analyzed. The background frame is updated using an infinite impulse response (IIR) filter so that it adapts to varying illumination conditions and geometry settings. To reduce the effect of noise pixels resulting from frame differencing, the morphological filters erosion and dilation are applied. This research found that the TFD technique is more suitable than BS for motion detection in terms of speed: on average, TFD is approximately 170 ms faster than the BS technique.
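The two detection schemes and the IIR background update can be sketched per pixel as follows (the learning rate and threshold are illustrative assumptions, not the paper's tuned values; frames are flattened grayscale arrays):

```python
# Minimal per-pixel sketch of the two compared schemes.

ALPHA = 0.05          # IIR learning rate for background adaptation (assumed)
THRESHOLD = 25        # per-pixel motion threshold (assumed)

def update_background(background, frame, alpha=ALPHA):
    """IIR filter B <- (1 - alpha) * B + alpha * F: the background slowly
    tracks illumination and geometry changes."""
    return [(1.0 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def background_subtraction(frame, background, thresh=THRESHOLD):
    """BS: mark pixels deviating from the adaptive background."""
    return [abs(f - b) > thresh for f, b in zip(frame, background)]

def temporal_frame_differencing(frame, prev_frame, thresh=THRESHOLD):
    """TFD: mark pixels that changed since the previous frame."""
    return [abs(f - p) > thresh for f, p in zip(frame, prev_frame)]
```

TFD needs only the previous frame and one subtraction per pixel, while BS additionally maintains the IIR background model, which is consistent with the speed difference reported above.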
Abstract: Research on brain-computer interfaces (BCI) has recently increased. Functional near-infrared spectroscopy (fNIRs) is one of the latest technologies that uses light in the near-infrared range to determine brain activity. Because near-infrared technology allows the design of safe, portable, wearable, non-invasive and wireless monitoring systems, fNIRs monitoring of brain hemodynamics can be valuable in helping to understand brain tasks. In this paper, we present results of fNIRs signal analysis indicating that there exist distinct patterns of hemodynamic responses from which brain tasks can be recognized, a step toward developing a BCI. We applied two different mathematical tools separately: wavelet analysis for preprocessing, as signal filter and feature extractor, and neural networks for recognizing brain tasks, as the classification module. We also compare with other methods; our proposal performs better, with an average classification accuracy of 99.9%.
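The wavelet preprocessing stage can be illustrated with the simplest case, a single-level Haar transform with soft thresholding of the detail coefficients (the paper's actual wavelet family, decomposition depth, and thresholds are not specified here, so this is a generic sketch of the idea):

```python
import math

# Sketch of wavelet denoising: Haar analysis, shrink small (noisy) detail
# coefficients, then reconstruct. Even-length input assumed.

def haar_forward(signal):
    """One level of the orthonormal Haar wavelet transform."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    s = 1.0 / math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append(s * (a + d))
        out.append(s * (a - d))
    return out

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; small ones (mostly noise) vanish."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(signal, t=0.5):
    approx, detail = haar_forward(signal)
    return haar_inverse(approx, soft_threshold(detail, t))
```

With the threshold at zero the transform is perfectly invertible; raising it trades fidelity for noise suppression, and the retained coefficients can double as features for the classifier.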
Abstract: An appropriate system for evaluating students' educational development is a key requirement for achieving predefined educational goals. The volume of papers in recent years that try to prove or disprove the necessity and adequacy of student assessment corroborates this. Some of these studies have tried to increase the precision of determining question weights in scientific examinations, but in all of them the attempt has been to adjust the initial question weights, while the accuracy and precision of those initial weights remain in question. Thus, in order to increase the precision of assessing students' educational development, the present study proposes a new method for determining the initial question weights by considering factors of the questions such as difficulty, importance and complexity, and by implementing a combined method of PROMETHEE and the fuzzy analytic network process, using a data mining approach to improve the model's inputs. The results of the implemented case study demonstrate the improved performance and precision of the proposed model.
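The PROMETHEE ranking step at the core of the combined method can be sketched as follows. This uses the simplest ("usual") preference function and hypothetical scores and weights; the paper's fuzzy ANP weighting and data-mining input refinement are not reproduced:

```python
# Sketch of PROMETHEE II net outranking flows with the 'usual' preference
# function P(d) = 1 if d > 0 else 0. scores[i][k] is the evaluation of
# question/alternative i on criterion k; weights are assumed to sum to 1.

def promethee_net_flows(scores, weights):
    n = len(scores)
    flows = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # aggregated preference of i over j, and of j over i
            pref_ij = sum(w for w, a, b in zip(weights, scores[i], scores[j]) if a > b)
            pref_ji = sum(w for w, a, b in zip(weights, scores[j], scores[i]) if a > b)
            flows[i] += (pref_ij - pref_ji) / (n - 1)
    return flows
```

Alternatives (here, exam questions scored on difficulty, importance and complexity) are then ranked by decreasing net flow, giving the initial weight ordering.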
Abstract: This paper investigates how exploiting multiple transmit antennas by OFDM-based wireless LAN subscribers can mitigate the physical layer error rate. By comparing wireless LANs that utilize spatial diversity techniques with conventional ones, it reveals how PHY and TCP throughput behaviors are improved. It then assesses the same issues in a cellular operation context, introduced as an innovative solution that, besides a multi-cell operation scenario, also benefits from spatio-temporal signaling schemes. The presented simulations shed light on the improved performance of the wide-range, high-quality wireless LAN services provided by the proposed approach.
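One canonical spatio-temporal transmit-diversity scheme, the 2x1 Alamouti code, can be sketched as follows. The paper's exact signaling scheme is not specified here, and the flat, noiseless single-receive-antenna channel is an assumption made to keep the combining algebra visible:

```python
# Sketch of the 2x1 Alamouti space-time block code: two transmit antennas,
# two symbol periods, one receive antenna, flat noiseless channel (assumed).

def alamouti_encode(s1, s2):
    """Returns [(ant1, ant2) transmitted at t1, (ant1, ant2) at t2]."""
    return [(s1, s2), (-s2.conjugate(), s1.conjugate())]

def channel(tx_pair, h1, h2):
    """Received sample for one symbol period."""
    a, b = tx_pair
    return h1 * a + h2 * b

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining with channel knowledge; the estimates come out
    scaled by (|h1|^2 + |h2|^2), i.e. with full two-branch diversity gain."""
    s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
    s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
    return s1_hat, s2_hat
```

The cross terms cancel exactly in the combiner, so each symbol is recovered with the combined energy of both channel paths; with noise present, this is what lowers the PHY error rate that the TCP throughput then benefits from.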
Abstract: Building maintenance plays an important role among the activities of building operation. Building defects and damage are the 'bread and butter' of building maintenance, as their input recorded during building inspection is very much justified, particularly for determining building performance. There is no escape route or short cut from building maintenance work. This study attempts to identify competitive performance measures that translate critical success factor achievements and satisfactorily meet the university's expectations. The quality and efficiency of a building's maintenance management operation depend, to some extent, on the building condition information, the expectations of the university sector, and the works carried out for each maintenance activity. This paper reviews the critical success factors in building maintenance management practice for the university sector from four (4) perspectives: (1) customer, (2) internal processes, (3) financial, and (4) learning and growth. Enhancing these perspectives makes it possible to reach the maintenance management goal of a better living environment on university campuses.
Abstract: Traditional wind tunnel models are meticulously machined from metal in a process that can take several months. While very precise, this manufacturing process is too slow to assess a new design's feasibility quickly. Rapid prototyping technology makes concurrent study of air vehicle concepts via computer simulation and in the wind tunnel possible. This paper describes the effect of layer thickness in rapid-prototyped models on the aerodynamic coefficients measured in wind tunnel testing. Three models were evaluated: the first with a 0.05 mm layer thickness and horizontal-plane roughness of 0.1 μm (Ra), the second with a 0.125 mm layer thickness and 0.22 μm (Ra), and the third with a 0.15 mm layer thickness and 4.6 μm (Ra). The models were fabricated from Somos 18420 by stereolithography (SLA). A wing-body-tail configuration was chosen for the study. Testing covered the range Mach 0.3 to Mach 0.9 at angles of attack from -2° to +12° at zero sideslip. Coefficients of normal force, axial force, pitching moment, and lift over drag are shown at each of these Mach numbers. The results show that layer thickness does have an effect on the aerodynamic characteristics; in general, the data differ between the three models by less than 5%. Layer thickness has a greater effect on the aerodynamic characteristics as Mach number decreases, and the greatest effect on axial force and its derived coefficients.
Abstract: With the aim of improving the nutritional profile and antioxidant capacity of gluten-free cookies, blueberry pomace, a by-product of juice production, was processed into a new food ingredient by drying and grinding and used in a gluten-free cookie formulation. Since the quality of a baked product is highly influenced by the baking conditions, the objective of this work was to optimize the baking time and thickness of dough pieces by applying Response Surface Methodology (RSM) in order to obtain the best technological quality of the cookies. The experiments were carried out according to a Central Composite Design (CCD), with dough thickness and baking time as independent variables, while hardness, color parameters (L*, a* and b* values), water activity, diameter and short/long ratio were the response variables. According to the results of the RSM analysis, a baking time of 13.74 min and a dough thickness of 4.08 mm were found to be optimal for a baking temperature of 170 °C. As similar optimal parameters were obtained in a previously conducted experiment based on sensory analysis, RSM can be considered a suitable approach for optimizing the baking process.
Abstract: The simulation of the extrusion process is widely studied in order to both increase output and improve quality, with broad application in wire coating. The annular tube-tooling extrusion was modeled by the Navier-Stokes equations together with a rheological model of differential form based on the single-mode exponential Phan-Thien/Tanner constitutive equation, in a two-dimensional cylindrical coordinate system, to predict the contraction point of the polymer melt beyond the die. Numerical solutions are sought through a semi-implicit Taylor-Galerkin pressure-correction finite element scheme. The investigation focused on incompressible creeping flow with long relaxation times, with Weissenberg numbers up to 200. The isothermal case was considered, with surface tension effects on the free surface in extrudate flow and no slip at the die wall. Streamline-Upwind Petrov-Galerkin stabilization was used. The mesh structure after the die exit was adjusted following the prediction of both the top and bottom free surfaces so as to keep the location of the contraction point around one
unit length, which is close to experimental results.
Abstract: Proteins or genes that have similar sequences are likely to perform the same function. One of the most widely used techniques for sequence comparison is sequence alignment, which allows mismatches and insertions/deletions representing biological mutations. Sequence alignment is usually performed on only two sequences; multiple sequence alignment is a natural extension of two-sequence alignment in which the emphasis is on finding an optimal alignment for a group of sequences. Several applicable techniques were examined in this research, from traditional methods such as dynamic programming to widely used stochastic optimization methods such as genetic algorithms (GAs) and simulated annealing. A framework combining a genetic algorithm and simulated annealing is presented to solve the multiple sequence alignment problem: the genetic algorithm phase tries to find new regions of the solution space, while simulated annealing acts as an alignment improver for any near-optimal solution produced by the GA.
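The two-sequence dynamic-programming building block that multiple alignment extends can be sketched with the classical Needleman-Wunsch recurrence (the scoring values below, match +1, mismatch -1, gap -1, are generic assumptions, not the paper's scheme):

```python
# Sketch of global pairwise alignment scoring by dynamic programming:
# dp[i][j] is the best score aligning a[:i] with b[:j], allowing
# substitutions (mutations) and gaps (insertions/deletions).

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = i * gap                   # a aligned against all gaps
    for j in range(1, cols):
        dp[0][j] = j * gap                   # b aligned against all gaps
    for i in range(1, rows):
        for j in range(1, cols):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # align / mutate
                           dp[i - 1][j] + gap,       # gap in b
                           dp[i][j - 1] + gap)       # gap in a
    return dp[-1][-1]
```

Exact DP of this kind scales exponentially with the number of sequences in the multiple-alignment case, which is precisely why the GA/SA framework described above resorts to stochastic search.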
Abstract: In this paper we suggest a new system for e-government. In this method, a government can design a precise and complete system to administer people and organizations using five major documents. These documents contain the important information of each member of the society and help all organizations perform their informatics tasks through them. This information would be accessible only via a national code and would be protected by a secure program. The suggested system can raise awareness in the society and help it be managed correctly.
Abstract: This paper presents a vertical silicon nanowire n-MOSFET integrated with a CMOS-compatible fully-silicided (FUSI) NiSi2 gate. Devices with a nanowire diameter of 50 nm show good electrical performance (SS < 70 mV/dec, DIBL < 30 mV/V, Ion/Ioff > 10^7). Most significantly, a threshold voltage tunability of about 0.2 V is demonstrated. Although the threshold voltage remains low for the 50 nm diameter device, it is expected to become more positive as the nanowire diameter is reduced.
Abstract: A new fast correlation algorithm for calibrating the wavelength of optical spectrum analyzers (OSAs) was introduced in [1]. The minima of acetylene gas spectra were measured and correlated with saved theoretical data [2], making it possible to find the correct wavelength calibration data using a noisy reference spectrum. First tests showed good algorithmic performance for gas line spectra with high noise. In this article, extensive performance tests are made to validate the noise resistance of this algorithm. The filter and correlation parameters of the algorithm were optimized for improved noise performance. With these parameters, the performance of this wavelength calibration was simulated to predict the resulting wavelength error in real OSA systems. Long-term simulations were made to evaluate the performance of the algorithm over the lifetime of a real OSA.
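The correlation step at the heart of such a calibration can be sketched as a sliding inner product: slide the measured spectrum over the theoretical reference and pick the lag with the highest correlation, which maps to a wavelength-offset correction. The filter stages and parameter choices of the actual algorithm are not reproduced here:

```python
# Sketch of offset estimation by discrete cross-correlation: the lag that
# maximizes the inner product between the measured spectrum and the
# (shifted) theoretical reference gives the wavelength offset in samples.

def best_shift(measured, reference, max_shift):
    best, best_lag = float("-inf"), 0
    for lag in range(-max_shift, max_shift + 1):
        score = 0.0
        for i, m in enumerate(measured):
            j = i + lag
            if 0 <= j < len(reference):
                score += m * reference[j]
        if score > best:
            best, best_lag = score, lag
    return best_lag
```

Because the score sums contributions from every spectral line, the estimate stays stable even when individual lines are buried in noise, which is the noise resistance the tests above quantify.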
Abstract: The aim of the present study was to develop and validate an inexpensive and simple high performance liquid chromatographic (HPLC) method for the determination of colistin sulfate. Separation of colistin sulfate was achieved on a ZORBAX Eclipse XDB-C18 column using UV detection at λ = 215 nm. The mobile phase was 30 mM sulfate buffer (pH 2.5):acetonitrile (76:24). Excellent linearity (r² = 0.998) was found in the concentration range of 25-400 μg/mL. Intra-day and inter-day precisions of the method (%RSD, n = 3) were less than 7.9%. The developed and validated method was applied to the determination of colistin sulfate content in a medicated premix and an animal feed sample. The recovery of colistin from animal feed ranged satisfactorily from 90.92 to 93.77%. The results demonstrate that the HPLC method developed in this work is appropriate for the direct determination of colistin sulfate in commercial medicated premixes and animal feed.
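The linearity figure reported above comes from a least-squares calibration line through (concentration, peak-area) pairs; the computation can be sketched as follows (the data points used in the test are hypothetical, not the paper's measurements):

```python
# Sketch of the calibration-curve check: least-squares line y = a*x + b
# through (concentration, response) pairs, and its coefficient of
# determination r^2.

def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1.0 - ss_res / ss_tot
    return slope, intercept, r2
```

An r² close to 1 over the working range (here 25-400 μg/mL) is what justifies reading unknown concentrations off the fitted line.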
Abstract: Speckled images arise when coherent microwave, optical, or acoustic imaging techniques are used to image an object, surface or scene. Examples of coherent imaging systems include synthetic aperture radar, laser imaging systems, imaging sonar systems, and medical ultrasound systems. Speckle noise is a form of object- or target-induced noise that results when the surface of the object is Rayleigh-rough compared to the wavelength of the illuminating radiation. Detection and estimation in images corrupted by speckle noise are complicated by the nature of the noise and are not as straightforward as detection and estimation in additive noise. In this work, we derive stochastic models for speckle noise, with an emphasis on speckle as it arises in medical ultrasound images. The motivation for this work is the problem of segmentation and tissue classification using ultrasound imaging. Modeling of speckle in this context involves a partially developed speckle model in which an underlying Poisson point process modulates a Gram-Charlier series of Laguerre-weighted exponential functions, resulting in a doubly stochastic filtered Poisson point process. The statistical distribution of partially developed speckle is derived in closed canonical form. It is observed that as the mean number of scatterers in a resolution cell increases, the probability density function approaches an exponential distribution, consistent with fully developed speckle noise as predicted by the central limit theorem.
Abstract: Wireless sensor networks (WSNs) consist of many sensor nodes placed in unattended environments, such as military sites, to collect important information. It is very important to implement a secure protocol that prevents the forwarding of forged data and the modification of aggregated data, while keeping the delay and the communication, computation and storage overheads low. This paper presents a new protocol for concealed data aggregation (CDA). In this protocol, the network is divided into virtual cells, and the nodes within each cell produce a shared key to send and receive concealed data among themselves. Because data aggregation in each cell is performed locally and a secure authentication mechanism is implemented, the data aggregation delay is very low and malicious nodes cannot inject false data into the network. To evaluate the performance of the proposed protocol, we present computational models that show its performance and low overhead.
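The core idea of concealed data aggregation, aggregating ciphertexts without decrypting individual readings, can be illustrated with a minimal additively homomorphic masking sketch. This is not the paper's protocol: the modulus, the key-sharing arrangement, and the absence of the authentication layer are all simplifying assumptions:

```python
# Illustrative sketch of additively concealed aggregation: each node masks
# its reading with a key shared with the cell aggregator; intermediate
# nodes can sum ciphertexts without any key, and only the aggregator
# (holding all shared keys) recovers the plaintext sum.

MOD = 2 ** 32   # working modulus (assumed); all arithmetic is mod MOD

def conceal(value, key):
    return (value + key) % MOD

def aggregate(ciphertexts):
    """In-network aggregation: a keyless sum of ciphertexts."""
    return sum(ciphertexts) % MOD

def reveal(aggregate_ct, keys):
    """The cell aggregator removes the summed keys to get the sum of readings."""
    return (aggregate_ct - sum(keys)) % MOD
```

Forwarding nodes never see a plaintext, yet the aggregate survives: conceal(a) + conceal(b) reveals to a + b, which is the homomorphic property CDA schemes rely on.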