Abstract: This paper presents the implementation of a new ordering strategy for the Successive Overrelaxation (SOR) scheme on two-dimensional boundary value problems. The strategy involves two sweep directions applied alternately, from the top and from the bottom of the solution domain. The method is shown to significantly reduce the number of iterations required for convergence. Four numerical experiments were carried out to examine the performance of the new strategy.
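As a rough illustration of an alternating-sweep SOR iteration (the Laplace test problem, 20x20 grid, relaxation factor, and tolerance below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def sor_alternating(u, omega=1.8, tol=1e-8, max_iter=10_000):
    """SOR for the 2D Laplace equation with Dirichlet boundaries,
    alternating the row sweep direction between top-down and bottom-up."""
    n, m = u.shape
    for it in range(1, max_iter + 1):
        # Odd iterations sweep rows from the top, even ones from the bottom.
        rows = range(1, n - 1) if it % 2 else range(n - 2, 0, -1)
        diff = 0.0
        for i in rows:
            for j in range(1, m - 1):
                new = (1 - omega) * u[i, j] + omega * 0.25 * (
                    u[i - 1, j] + u[i + 1, j] + u[i, j - 1] + u[i, j + 1])
                diff = max(diff, abs(new - u[i, j]))
                u[i, j] = new
        if diff < tol:
            return u, it
    return u, max_iter

# Illustrative problem: u = 1 on the top edge, 0 on the other edges.
u = np.zeros((20, 20))
u[0, :] = 1.0
u, iters = sor_alternating(u)
```

The sweep-direction flip is the only change relative to plain SOR, so the two schemes are easy to compare by iteration count.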
Abstract: Transmission network expansion planning (TNEP) is
a basic part of power system planning that determines where, when
and how many new transmission lines should be added to the
network. To date, various methods have been presented to solve
the static transmission network expansion planning (STNEP)
problem, but none of them has investigated transmission expansion
planning subject to a network adequacy restriction. Thus, in this
paper, the STNEP problem is studied considering a network
adequacy restriction, using a discrete particle swarm
optimization (DPSO) algorithm. The goal of this paper is to
obtain a configuration for network expansion with the lowest
expansion cost and a specified adequacy. The proposed idea has
been tested on Garver's network and compared with the decimal
codification genetic algorithm (DCGA). The results show that the
expanded network possesses maximum economic efficiency. It is
also shown that the precision and convergence speed of the
proposed DPSO-based method for the solution of the STNEP problem
are higher than those of the DCGA approach.
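The discrete (binary) PSO update can be sketched as follows. This is the standard sigmoid-based binary PSO rule, not the paper's exact formulation; the cost vector and the crude "adequacy" penalty (at least three lines selected) are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the STNEP objective: each bit selects a candidate
# line with a given cost; "adequacy" is crudely modeled as a penalty
# when fewer than 3 lines are selected.  (Illustrative only.)
costs = np.array([4.0, 2.0, 7.0, 1.0, 5.0, 3.0])

def objective(x):
    penalty = 100.0 * max(0, 3 - int(x.sum()))
    return float(costs @ x) + penalty

def binary_pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    d = len(costs)
    x = rng.integers(0, 2, size=(n_particles, d)).astype(float)
    v = np.zeros((n_particles, d))
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        # Discrete PSO: the sigmoid of the velocity gives the probability
        # that each bit is set to 1 in the next position.
        x = (rng.random((n_particles, d)) < 1 / (1 + np.exp(-v))).astype(float)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, objective(g)

best, best_cost = binary_pso()
```

In the real STNEP setting the objective would evaluate expansion cost plus the adequacy criterion on the candidate network rather than this toy penalty.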
Abstract: The main aim of this work is to establish the
capability of new green buildings to achieve off-grid electricity
generation through the integration of wind turbines into the
conceptual model of a rotating tower [2] in Dubai. An in-depth
performance analysis of the WinWind 3.0MW [3] wind turbine is
performed. Data from the Dubai Meteorological Services are
collected and analyzed in conjunction with the performance
analysis of this wind turbine. The mathematical model is compared
with Computational Fluid Dynamics (CFD) results based on a
conceptual rotating tower design model. The comparison results
are further validated and verified for accuracy by conducting
experiments on a scaled prototype of the tower design. The study
concluded that integrating wind turbines inside a rotating tower
can generate enough electricity to meet the required power
consumption of the building, equivalent to a wind farm containing
9 horizontal-axis wind turbines occupying an approximate area of
3,237,485 m2 [14].
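For orientation, the power available to a single turbine follows the standard wind-power equation. All numbers below are illustrative assumptions, not the study's data or the WinWind 3.0MW datasheet:

```python
import numpy as np

# Standard wind-power equation P = 0.5 * rho * A * v^3 * Cp.
rho = 1.225          # air density, kg/m^3 (sea level)
radius = 50.0        # assumed rotor radius, m
v = 10.0             # assumed wind speed, m/s
cp = 0.4             # assumed power coefficient (Betz limit ~0.593)

A = np.pi * radius ** 2          # swept rotor area, m^2
P = 0.5 * rho * A * v ** 3 * cp  # extracted power, W (~1.9 MW here)
```

Multiplying a figure like this by the site's wind-speed distribution is what a performance analysis of the kind described above effectively does.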
Abstract: This paper studies the segmented split-capacitor
Digital-to-Analog Converter (DAC) implemented in a
differential-type 12-bit Successive Approximation
Analog-to-Digital Converter (SA-ADC). The series-capacitance
split-array method is employed because it reduces the total
capacitor area required for high-resolution DACs: a 12-bit
regular binary array structure requires 2049 unit capacitors
(Cs), while the split array needs only 127 unit Cs. This results
in a reduction of the total capacitance and power consumption of
the series split-array architecture compared with regular
binary-weighted structures. The paper presents the 12-bit series
split-capacitor DAC with a 4-bit thermometer-coded DAC
architecture, as well as simulation and measured results.
Abstract: This paper presents a novel iris recognition system
using a 1D log-polar Gabor wavelet and Euler numbers. The 1D
log-polar Gabor wavelet is used to extract the textural features,
and Euler numbers are used to extract topological features of the
iris. The proposed decision strategy uses these features to
authenticate an individual's identity while maintaining a low
false rejection rate. The algorithm was tested on the CASIA iris
image database and found to perform better than existing
approaches, with an overall accuracy of 99.93%.
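The 1D log-Gabor filter used for texture extraction has a standard frequency-domain form. A minimal sketch; the center frequency and bandwidth ratio are illustrative choices, not the paper's parameters:

```python
import numpy as np

def log_gabor(n, f0=0.1, sigma_ratio=0.55):
    """1D log-Gabor transfer function on the positive-frequency axis:
    G(f) = exp(-ln(f/f0)^2 / (2 ln(sigma/f0)^2)), with G(0) = 0."""
    f = np.fft.rfftfreq(n)
    g = np.zeros_like(f)
    nz = f > 0  # the log-Gabor filter has no DC component
    g[nz] = np.exp(-np.log(f[nz] / f0) ** 2
                   / (2 * np.log(sigma_ratio) ** 2))
    return g

g = log_gabor(256)
```

Filtering an unwrapped iris row with this transfer function (multiply in the frequency domain, inverse FFT) yields the complex response whose phase or magnitude is typically quantized into the feature code.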
Abstract: A fusion classifier composed of two modules, one based on a hidden Markov model (HMM) and the other on a support vector machine (SVM), is proposed to recognize faces with pose variations in open-set recognition settings. The HMM module captures the evolution of facial features across a subject's face using the subject's facial images only, without reference to the faces of others. Because of this captured evolutionary process of facial features, the HMM module retains a certain robustness against pose variations, yielding low false rejection rates (FRR) when recognizing faces across poses. This comes, however, at the price of poor false acceptance rates (FAR) when recognizing other faces, because the module is built upon within-class samples only. The SVM module in the proposed model follows a special design able to substantially diminish the FAR and further lower the FRR. The proposed fusion classifier has been evaluated on the CMU PIE database and proven effective for open-set face recognition with pose variations. Experiments have also shown that it outperforms a face classifier built on an HMM or an SVM alone.
Abstract: A combination of image fusion and the quad tree decomposition method is used to detect monthly sunspot trajectories and to compute the latitudes of these trajectories in each solar hemisphere. Daily solar images taken with the SOHO satellite are fused for each month, and the fused image is decomposed with the quad tree decomposition method in order to classify the sunspot trajectories and obtain precise information about their latitudes. The fusion also yields some remarkable physical conclusions about the behavior of the Sun's magnetic fields. Using quad tree decomposition, we obtain information about the regions on the solar surface and the solid angles through which tremendous flares and hot plasma gases permeate interplanetary space and strike satellites and human technical systems. Sunspot images from June, July and August 2001 are used in this study, and a method is given to compute the latitude of the sunspot trajectories in each month from sunspot images.
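Quad tree decomposition itself is a simple recursive split. A minimal sketch, assuming a homogeneity test based on the block's intensity range (the 8x8 test image and threshold are illustrative, not the paper's solar data):

```python
import numpy as np

def quadtree(img, x=0, y=0, size=None, thresh=0.1, min_size=2):
    """Recursively split a square block into four quadrants until each
    block is nearly homogeneous; return the list of leaf blocks as
    (x, y, size) tuples."""
    if size is None:
        size = img.shape[0]
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.max() - block.min() <= thresh:
        return [(x, y, size)]
    h = size // 2
    leaves = []
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        leaves += quadtree(img, x + dx, y + dy, h, thresh, min_size)
    return leaves

# Illustrative 8x8 image: a bright patch standing in for a sunspot region.
img = np.zeros((8, 8))
img[0:2, 0:2] = 1.0
leaves = quadtree(img)
```

Small leaves cluster around inhomogeneous regions, which is what makes the decomposition useful for localizing sunspot trajectories in a fused image.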
Abstract: Bendability is constrained by the maximum top-roller
load imparting capacity of the machine. The maximum load is
encountered during the edge pre-bending stage of roller bending.
The capacity of a 3-roller plate bending machine is specified by
the maximum thickness and minimum shell diameter combinations
that can be pre-bent for a given plate material of maximum width.
The commercially available plate width, or the width of plate
that can be accommodated on the machine, decides the maximum
rolling width. Original equipment manufacturers (OEMs) provide
the machine capacity chart based on a reference material,
considering a perfectly plastic material model. The reported work
presents the bendability analysis of a heavy-duty 3-roller plate
bending machine. The input variables for the industry are plate
thickness, shell diameter and material property parameters, as
these are fixed by the design. Analytical models of equivalent
thickness, equivalent width and maximum width based on a
power-law material model were derived to study bendability. The
equation of maximum width provides the bendability for a designed
configuration, i.e. the material property, shell diameter and
thickness combinations within the machine limitations. Equivalent
thicknesses based on the perfectly plastic and power-law material
models were compared for four different material grades of C-Mn
steel in order to predict the bendability. The effect of
top-roller offset on the bendability at maximum top-roller load
imparting capacity is reported.
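The two material models compared above can be written in the standard Hollomon form (the symbols below are the usual ones for power-law hardening, not notation taken from the paper):

```latex
% Power-law (Hollomon) hardening: flow stress vs. strain,
% with strength coefficient K and hardening exponent n
\sigma = K\,\varepsilon^{\,n}
% Perfectly plastic limit (the OEM capacity-chart assumption):
% n = 0, so the flow stress is a constant yield stress
\sigma = \sigma_y
```

The equivalent-thickness comparison in the paper amounts to asking what plate thickness under one model demands the same top-roller load as a given thickness under the other.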
Abstract: Several Java benchmarks exist, including application benchmarks, micro benchmarks and mixtures of both, such as Java Grande, Spec98, CaffeineMark and HBench. None of them, however, deals with the behavior of multitasking operating systems, so the results they produce are not satisfactory for performance-evaluation engineers. The behavior of a multitasking operating system is governed by the schedule management it employs: different processes can have different priorities when sharing the same resources. Time measured from when an application starts to when it finishes therefore does not reflect the real time the system needs to run the program, and a new approach to the problem is needed. In this paper we present a new Java benchmark, named the FHOJ benchmark, which directly addresses the multitasking behavior of a system. Our study shows that in some cases, results from the FHOJ benchmark are far more reliable than those from some existing Java benchmarks.
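The central point, that elapsed wall-clock time under a multitasking scheduler is not the time the system actually spends on the program, can be illustrated by measuring both clocks (shown here in Python rather than Java, purely for brevity):

```python
import time

def bench(fn, *args):
    """Report both wall-clock and CPU time for a call.  Under a
    multitasking OS the wall-clock span also includes time spent
    running other processes, so CPU time is the more faithful
    measure of the work itself."""
    w0, c0 = time.perf_counter(), time.process_time()
    result = fn(*args)
    wall = time.perf_counter() - w0
    cpu = time.process_time() - c0
    return result, wall, cpu

res, wall, cpu = bench(sum, range(1_000_000))
```

On a loaded machine the gap between `wall` and `cpu` grows with scheduler contention, which is exactly the effect a multitasking-aware benchmark has to account for.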
Abstract: Advances in clinical medical imaging have brought about the routine production of vast numbers of medical images that need to be analyzed. As a result, an enormous amount of computer vision research effort has been targeted at achieving automated medical image analysis. Computed Tomography (CT) is highly accurate for diagnosing liver tumors. This study aimed to evaluate the potential role of the wavelet and the neural network in the differential diagnosis of liver tumors in CT images. The tumors considered in this study are hepatocellular carcinoma, cholangiocarcinoma, hemangioma and hepatoadenoma. Each suspicious tumor region was automatically extracted from the CT abdominal images, and the textural information obtained was used to train a Probabilistic Neural Network (PNN) to classify the tumors. The results obtained were evaluated with the help of radiologists. The system differentiates the tumors with relatively high accuracy and is therefore clinically useful.
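A Probabilistic Neural Network reduces to a Parzen-window density estimate per class. A minimal sketch; the 2-D "textural feature" vectors and smoothing width below are made up, since the abstract does not give the real features:

```python
import numpy as np

def pnn_classify(train_X, train_y, x, sigma=0.3):
    """PNN decision rule: a Gaussian Parzen-window density is estimated
    for each class, and x is assigned to the class whose average kernel
    response is largest."""
    scores = {}
    for c in np.unique(train_y):
        pts = train_X[train_y == c]
        d2 = np.sum((pts - x) ** 2, axis=1)
        scores[c] = float(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
    return max(scores, key=scores.get)

# Toy feature vectors standing in for tumour texture descriptors.
train_X = np.array([[0.1, 0.1], [0.2, 0.0], [0.0, 0.2],
                    [1.0, 1.0], [1.1, 0.9], [0.9, 1.1]])
train_y = np.array([0, 0, 0, 1, 1, 1])
label = pnn_classify(train_X, train_y, np.array([1.05, 1.0]))
```

Because the PNN stores the training exemplars directly, training is a single pass; only the smoothing parameter sigma needs tuning.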
Abstract: This paper presents the averaging model of a buck
converter derived from the generalized state-space averaging
method. Sliding mode control is used to regulate the output
voltage of the converter and is taken into account in the model.
The proposed model requires much less computational time than the
full-topology model. Intensive time-domain simulations of the
exact topology model are used as the reference for comparison.
The results show that good agreement between the proposed model
and the switching model is achieved in both the transient and
steady-state responses. The reported model is suitable for
optimal controller design using artificial intelligence
techniques.
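For reference, the zeroth-order state-space averaged model of an ideal buck converter has the textbook form below (duty cycle $d$, load $R$; this is the standard averaged model, not the paper's notation, and the paper's generalized averaging would add higher-order harmonic terms):

```latex
L\,\frac{d\bar{\imath}_L}{dt} = d\,V_{in} - \bar{v}_o,
\qquad
C\,\frac{d\bar{v}_o}{dt} = \bar{\imath}_L - \frac{\bar{v}_o}{R}
```

Replacing the switched topology by these smooth averaged equations is what removes the switching-frequency time scale and yields the fast simulation times reported above.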
Abstract: Problems on algebraic polynomials appear in many fields of mathematics and computer science. In particular, the task of determining the roots of polynomials has been frequently investigated. Nonetheless, locating the zeros of complex polynomials remains challenging. In this paper we deal with the location of the zeros of univariate complex polynomials. We prove some novel upper bounds for the moduli of the zeros of complex polynomials; that is, we provide disks in the complex plane in which all zeros of a complex polynomial are situated. Such bounds are extremely useful for obtaining a priori assertions regarding the location of the zeros of polynomials. Based on the proven bounds and a test set of polynomials, we present an experimental study to examine which bound is optimal.
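A classical bound of the kind studied here is Cauchy's bound: every zero of p(z) = a_n z^n + ... + a_1 z + a_0 satisfies |z| <= 1 + max_{i<n} |a_i / a_n|. A quick numerical check (the cubic below is an illustrative example, not from the paper's test set):

```python
import numpy as np

def cauchy_bound(coeffs):
    """Cauchy's upper bound on the moduli of the zeros of a polynomial
    whose coefficients are given highest degree first."""
    a = np.asarray(coeffs, dtype=complex)
    return 1 + np.abs(a[1:] / a[0]).max()

# p(z) = z^3 - 6z^2 + 11z - 6 = (z - 1)(z - 2)(z - 3); zeros 1, 2, 3.
coeffs = [1, -6, 11, -6]
bound = cauchy_bound(coeffs)   # 1 + 11 = 12
roots = np.roots(coeffs)
```

The experimental comparison described in the abstract amounts to evaluating several such bounds over a test set and checking which disk is tightest.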
Abstract: The aim of this study was to screen for microorganisms
able to utilize 3-N-trimethylamino-1-propanol (homocholine) as a
sole source of carbon and nitrogen. Aerobic degradation of
homocholine was found in a Gram-positive Rhodococcus sp.
bacterium isolated from soil. The isolate was identified as
Rhodococcus sp. strain A4 based on its phenotypic features,
physiological and biochemical characteristics, and phylogenetic
analysis. Cells of the isolated strain grown on both basal-TMAP
and nutrient agar medium displayed elementary branching mycelia
fragmented into irregular rod and coccoid elements. Comparative
16S rDNA sequencing studies indicated that strain A4 falls into
the Rhodococcus erythropolis subclade and forms a monophyletic
group with the type strains of R. opacus and R. wratislaviensis.
Metabolite analysis by capillary electrophoresis, fast atom
bombardment-mass spectrometry, and gas chromatography-mass
spectrometry showed trimethylamine (TMA) as the major metabolite,
besides β-alanine betaine and trimethylaminopropionaldehyde.
Therefore, the possible degradation pathway of
trimethylaminopropanol in the isolated strain is through
consecutive oxidation of the alcohol group (-OH) to the aldehyde
(-CHO) and the acid (-COOH), followed by cleavage of the
β-alanine betaine C-N bond to yield trimethylamine and the alkyl
chain.
Abstract: Distant-talking voice-based HCI systems suffer from
performance degradation due to the mismatch between the acoustic
speech (runtime) and the acoustic model (training). The mismatch
is caused by the change in the power of the speech signal as
observed at the microphones. This change is greatly influenced by
the change in distance, which affects the speech dynamics inside
the room before the signal reaches the microphones. Moreover, as
the speech signal is reflected, its acoustical characteristics
are also altered by the room properties. In general, power
mismatch due to distance is a complex problem. This paper
presents a novel approach to dealing with distance-induced
mismatch by intelligently sensing instantaneous voice power
variation and compensating the model parameters. First, the
distant-talking speech signal is processed through microphone
array processing, and the corresponding distance information is
extracted. Distance-sensitive Gaussian Mixture Models (GMMs),
pre-trained to capture both speech power and room properties, are
used to predict the optimal distance of the speech source.
Consequently, pre-computed statistical priors corresponding to
the optimal distance are selected to correct the statistics of
the generic model, which was frozen during training. Thus, the
model combination is post-conditioned to match the power of the
instantaneous speech acoustics at runtime. This results in an
improved likelihood of predicting the correct speech command at
farther distances. We experiment using real data recorded inside
two rooms. Experimental evaluation shows that voice recognition
using our method is more robust to changes in distance than the
conventional approach. In our experiment, under the most
acoustically challenging environment (i.e., Room 2 at 2.5
meters), our method achieved a 24.2% improvement in recognition
performance over the best-performing conventional method.
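The distance-prediction step can be sketched as a maximum-likelihood selection among pre-trained models. Here single Gaussians stand in for the distance-sensitive GMMs, and every mean, variance and distance is made up for illustration:

```python
import numpy as np

# Illustrative per-distance power models: (mean, std) of log-power,
# one model per training distance in metres.  (Values are invented.)
models = {0.5: (-2.0, 0.5), 1.5: (-4.0, 0.6), 2.5: (-6.0, 0.7)}

def log_likelihood(x, mu, sigma):
    """Gaussian log-likelihood of the observed frames x."""
    return -0.5 * np.sum(((x - mu) / sigma) ** 2
                         + np.log(2 * np.pi * sigma ** 2))

def predict_distance(frames):
    """Pick the pre-trained distance model with maximal likelihood."""
    return max(models, key=lambda d: log_likelihood(frames, *models[d]))

rng = np.random.default_rng(1)
frames = rng.normal(-4.1, 0.6, size=200)  # runtime log-power samples
d_hat = predict_distance(frames)
```

Once the best-matching distance is chosen, the corresponding pre-computed priors would be used to correct the frozen generic model, as described above.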
Abstract: With the emergence of new applications such as robot
control in image processing, artificial vision for visual
servoing is a rapidly growing discipline, and human-machine
interaction plays a significant role in controlling robots. This
paper presents a new algorithm based on spatio-temporal volumes
for visual servoing, aimed at controlling robots. In this
algorithm, after applying the necessary pre-processing to the
video frames, a spatio-temporal volume is constructed for each
gesture and a feature vector is extracted. These volumes are then
analyzed for matching in two consecutive stages. For hand gesture
recognition and classification we tested different classifiers,
including k-nearest neighbor, learning vector quantization and
back-propagation neural networks. We tested the proposed
algorithm on the collected data set, and the results showed a
correct gesture recognition rate of 99.58%. We also tested the
algorithm on noisy images, where it achieved a correct
recognition rate of 97.92%.
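Of the classifiers tested, k-nearest neighbor is the simplest to sketch. The 2-D feature vectors below are toy stand-ins for the spatio-temporal volume features, which the abstract does not specify:

```python
import numpy as np

def knn_classify(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbour majority vote over feature vectors."""
    d = np.linalg.norm(train_X - x, axis=1)   # Euclidean distances
    nearest = train_y[np.argsort(d)[:k]]      # labels of the k closest
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[counts.argmax()]

# Toy feature vectors for two gesture classes (illustrative data).
train_X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                    [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
train_y = np.array([0, 0, 0, 1, 1, 1])
label = knn_classify(train_X, train_y, np.array([0.95, 1.0]))
```

The same interface (feature vector in, class label out) fits the LVQ and back-propagation classifiers the paper also compares.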
Abstract: Current advancements in nanotechnology depend on capabilities that enable nano-scientists to extend their eyes and hands into the nano-world. For this purpose, a haptics-based system (using devices capable of recreating tactile or force sensations) for the AFM (Atomic Force Microscope) is proposed. The system enables nano-scientists to touch and feel sample surfaces viewed through the AFM, in order to give them a better understanding of the physical properties of the surface, such as roughness, stiffness and the shape of the molecular architecture. At this stage, the proposed work uses offline images produced with the AFM and performs image analysis to create virtual surfaces suitable for haptic force analysis. The research is in the process of being extended from an offline to an online process, in which interaction will be done directly on the material surface for realistic analysis.
Abstract: This paper presents an efficient method of obtaining straight-line motion in the tool configuration space using an articulated robot between two specified points. The simulation and implementation results show the effectiveness of the method.
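A minimal sketch of the idea: interpolate linearly between the two points in Cartesian tool space and solve the inverse kinematics at each waypoint. A planar 2R arm with unit links is assumed here so the kinematics stays short; the paper's articulated robot would use its own IK:

```python
import numpy as np

L1, L2 = 1.0, 1.0  # link lengths of the assumed planar 2R arm

def ik(x, y):
    """Elbow-down inverse kinematics of the planar 2R arm."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    t2 = np.arccos(np.clip(c2, -1.0, 1.0))
    t1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(t2),
                                       L1 + L2 * np.cos(t2))
    return t1, t2

def fk(t1, t2):
    """Forward kinematics: joint angles to tool position."""
    return (L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
            L1 * np.sin(t1) + L2 * np.sin(t1 + t2))

def straight_line(p0, p1, steps=10):
    """Linear interpolation in tool space, with IK at each waypoint."""
    return [ik(*((1 - s) * np.array(p0) + s * np.array(p1)))
            for s in np.linspace(0, 1, steps)]

path = straight_line((1.5, 0.0), (0.5, 1.0))
```

The resulting joint-space trajectory traces a straight line in tool space even though the individual joint angles vary nonlinearly.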
Abstract: Several works on facial recognition have dealt with methods which identify isolated characteristics of the face or with templates which encompass several regions of it. In this paper a new technique is introduced which approaches the problem holistically, dispensing with the need to identify geometrical characteristics or regions of the face. The characterization of a face is achieved by randomly sampling selected attributes of the pixels of its image. From this information we construct a data set corresponding to the values of low frequencies, gradient, entropy and several other characteristics of the pixels of the image, generating a set of "p" variables. The multivariate data set is approximated with different polynomials minimizing the data fitness error in the minimax sense (L∞ norm). With the use of a Genetic Algorithm (GA) the problem of dimensionality inherent to higher-degree polynomial approximations can be circumvented. The GA yields the degree and the values of the coefficients of the polynomials approximating the image of a face. The system is trained by finding, through a resampling process, a family of characteristic polynomials in several variables (pixel characteristics) for each face (say Fi) in the database. A face (say F) is recognized by finding its characteristic polynomials and applying an AdaBoost classifier from F's polynomials to each of the Fi's polynomials. The winner is the polynomial family closest to F's, corresponding to the target face in the database.
Abstract: In this paper, we propose a novel frequency offset
estimation scheme for orthogonal frequency division multiplexing
(OFDM) systems. By correlating the OFDM signals within the
coherence phase bandwidth and employing a threshold in the
frequency offset estimation process, the proposed scheme is not
only robust to the timing offset but also less complex than the
conventional scheme. Moreover, a timing offset estimation scheme
is proposed as the next stage of the frequency offset estimation.
Numerical results show that the proposed scheme can estimate the
frequency offset with lower computational complexity and no
additional memory while maintaining the same level of estimation
performance.
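The abstract does not give the estimator's exact form; as a hedged illustration of correlation-based frequency-offset estimation in OFDM, the classical cyclic-prefix correlation estimator is sketched below (noiseless, single symbol, with made-up FFT and prefix sizes):

```python
import numpy as np

N, CP = 64, 16           # FFT size and cyclic-prefix length (assumed)
eps_true = 0.12          # true carrier frequency offset, subcarrier units
rng = np.random.default_rng(2)

# One OFDM symbol with cyclic prefix.
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
s = np.fft.ifft(x) * np.sqrt(N)
tx = np.concatenate([s[-CP:], s])

# Apply the frequency offset (noiseless channel for the sketch).
n = np.arange(N + CP)
rx = tx * np.exp(2j * np.pi * eps_true * n / N)

# The CP and the symbol tail are identical up to a phase rotation of
# 2*pi*eps, so the angle of their correlation recovers the offset.
corr = np.vdot(rx[:CP], rx[N:N + CP])   # sum conj(r[n]) * r[n+N]
eps_hat = np.angle(corr) / (2 * np.pi)
```

Like the scheme described above, this estimator needs only a short correlation window and no extra memory beyond the received samples, though the paper's method additionally applies a threshold for robustness to timing offset.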
Abstract: One of the essential requirements of a realistic
surgical simulator is to reproduce the haptic sensations due to
interactions in the virtual environment. However, the interaction
needs to be performed in real time, since a delay between the
user action and the system reaction reduces the sensation of
immersion. In this paper, a prototype of a coronary stent implant
simulator is presented; this system allows real-time interaction
with an artery by means of a specific haptic device. To improve
the realism of the simulation, the virtual environment is built
from real patients' images, and a Web Portal is used to search
geographically remote medical centres for a virtual environment
with specific features in terms of pathology or anatomy. The
functional architecture of the system defines several Medical
Centres in which virtual environments built from real patients'
images, together with related metadata describing pathology and
anatomy, are stored. The selected data are downloaded from the
Medical Centre to the Training Centre, which is provided with a
specific haptic device and with the software necessary to manage
the interaction in the virtual environment. After the integration
of the virtual environment into the simulation system, it is
possible to perform training on the specific surgical procedure.