Abstract: Wireless Sensor Networks (WSNs) can be used to monitor physical phenomena in areas where human access is nearly impossible. The limited power supply is the major constraint of WSNs, since sensor nodes run on non-rechargeable batteries, and a great deal of research is devoted to reducing their energy consumption. An energy map can be combined with clustering, data dissemination, and routing techniques to reduce the power consumption of a WSN; it can also reveal which part of the network is likely to fail in the near future. In this paper, the energy map is constructed using a prediction-based approach, with the adaptive alpha GM(1,1) model as the prediction model. GM(1,1) is used worldwide in many applications to predict future values of a time series from a few past values, owing to its high computational efficiency and accuracy.
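To make the prediction step concrete, below is a minimal Python sketch of the classical GM(1,1) recursion; the adaptive-alpha variant used in this paper additionally tunes the background-value weight alpha, which is fixed at 0.5 here, and the sample energy readings are purely illustrative.

```python
import numpy as np

def gm11_predict(x0, steps=1, alpha=0.5):
    """Classical GM(1,1) forecast; alpha weights the background value
    (the adaptive variant tunes alpha to minimize fitting error)."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                          # accumulated generating operation (AGO)
    z = alpha * x1[1:] + (1 - alpha) * x1[:-1]  # background values
    B = np.column_stack([-z, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0] # developing coefficient a, grey input b
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.empty(n + steps)
    x0_hat[0] = x0[0]
    x0_hat[1:] = np.diff(x1_hat)                # inverse AGO recovers the series
    return x0_hat[n:]

# illustrative residual-energy readings of a sensor node (joules)
print(gm11_predict([100.0, 96.5, 93.2, 90.1, 87.2], steps=2))
```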
Abstract: Low-density parity-check (LDPC) codes have been shown to deliver capacity-approaching performance; however, problematic graphical structures (e.g. trapping sets) in the Tanner graph of some LDPC codes can cause high error floors in bit-error-ratio (BER) performance under the conventional sum-product algorithm (SPA). This paper presents a serial concatenation scheme to avoid the trapping sets and to lower the error floors of LDPC codes. The outer code in the proposed concatenation is the LDPC code, and the inner code is a high-rate array code. This approach applies an iterative hybrid process between BCJR decoding for the array code and the SPA for the LDPC code, together with bit-pinning and bit-flipping techniques. The (2640, 1320) Margulis code has been used for the simulation, and it has been shown that the proposed concatenation and decoding scheme can considerably improve the error-floor performance with minimal rate loss.
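As a concrete illustration of the bit-flipping component mentioned above, here is a minimal hard-decision bit-flipping decoder sketch; the toy parity-check matrix and the flip-all-worst-bits rule are illustrative assumptions, not the paper's actual decoder.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Hard-decision bit-flipping decoding for a binary parity-check
    matrix H and received hard bits y."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2                 # unsatisfied checks
        if not syndrome.any():
            return x, True                   # valid codeword found
        # count, for every bit, how many failing checks it participates in
        votes = H.T @ syndrome
        x[votes == votes.max()] ^= 1         # flip the worst offenders
    return x, False

# toy example: (7,4) Hamming-style parity-check matrix, one bit error
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, dtype=int)
received = codeword.copy()
received[2] ^= 1
decoded, ok = bit_flip_decode(H, received)
print(decoded, ok)
```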
Abstract: The main purpose of this paper is to investigate a discrete-time three-species food chain system with ratio dependence. By employing coincidence degree theory and analysis techniques, sufficient conditions for the existence of periodic solutions are established.
Abstract: Digital watermarking is one of the techniques for copyright protection. In this paper, a normalization-based robust image watermarking scheme which encompasses singular value decomposition (SVD) and discrete cosine transform (DCT) techniques is proposed. In the proposed scheme, the host image is first normalized to a standard form and divided into non-overlapping image blocks. SVD is applied to each block. By concatenating the first singular values (SV) of adjacent blocks of the normalized image, an SV block is obtained. DCT is then carried out on the SV blocks to produce SVD-DCT blocks. A watermark bit is embedded in the high-frequency band of an SVD-DCT block by imposing a particular relationship between two pseudo-randomly selected DCT coefficients. An adaptive frequency mask is used to adjust the local watermark embedding strength. Watermark extraction is essentially the inverse process; the extraction method is blind and efficient. Experimental results show that the quality degradation of the watermarked image caused by the embedded watermark is visually transparent. Results also show that the proposed scheme is robust against various image processing operations and geometric attacks.
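The core embedding step, imposing a relationship between two DCT coefficients of an SV block, can be sketched as follows; the coefficient indices k1 and k2, the fixed margin, and the random SV data are illustrative assumptions (the paper selects the coefficients pseudo-randomly and adapts the strength with a frequency mask).

```python
import numpy as np
from scipy.fft import dct, idct

def embed_bit(sv_block, bit, k1=5, k2=9, margin=4.0):
    """Embed one watermark bit into a block of leading singular values
    by forcing an order relation between two DCT coefficients."""
    c = dct(sv_block, norm='ortho')
    lo, hi = min(c[k1], c[k2]), max(c[k1], c[k2])
    # bit 1 -> c[k1] > c[k2]; bit 0 -> c[k1] < c[k2]
    c[k1], c[k2] = (hi + margin / 2, lo - margin / 2) if bit \
                   else (lo - margin / 2, hi + margin / 2)
    return idct(c, norm='ortho')

def extract_bit(sv_block, k1=5, k2=9):
    c = dct(sv_block, norm='ortho')
    return int(c[k1] > c[k2])

# leading singular values of 16 adjacent blocks (illustrative data)
rng = np.random.default_rng(0)
svs = np.sort(rng.uniform(50, 60, 16))[::-1].copy()
marked = embed_bit(svs, 1)
print(extract_bit(marked))   # -> 1
```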
Abstract: Various models have been derived by studying a large number of completed software projects from various organizations and applications, to explore how project size maps into project effort. Still, the prediction accuracy of these models needs to be improved. Since a neuro-fuzzy system is able to approximate non-linear functions with high precision, it is used here as a soft computing approach to generate a model by formulating the size-effort relationship through training. In this paper, the neuro-fuzzy technique is used for software effort estimation modeling on NASA software project data, and the performance of the developed models is compared with the Halstead, Walston-Felix, Bailey-Basili and Doty models reported in the literature.
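For reference, the four baseline models are closed-form size-to-effort equations; the constants below are the forms commonly cited in the effort-estimation literature (exact coefficients vary slightly across sources), with size in KLOC and effort in person-months.

```python
# Classical size-to-effort baselines (effort in person-months, size in KLOC)
def halstead(kloc):       return 0.7 * kloc ** 1.50
def walston_felix(kloc):  return 5.2 * kloc ** 0.91
def bailey_basili(kloc):  return 5.5 + 0.73 * kloc ** 1.16
def doty(kloc):           return 5.288 * kloc ** 1.047   # form used for KLOC > 9

for kloc in (10, 50, 100):
    print(kloc, round(halstead(kloc), 1), round(walston_felix(kloc), 1),
          round(bailey_basili(kloc), 1), round(doty(kloc), 1))
```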
Abstract: In this paper a new approach to face recognition is presented that achieves a double dimension reduction, making the system computationally efficient while improving recognition results. In pattern recognition, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results improve with increasing face image resolution and level off beyond a certain resolution. In the proposed model, an image decimation algorithm is first applied to the face image to reduce its dimension to the resolution level that gives the best recognition results. The Discrete Cosine Transform (DCT), chosen for its computational speed and feature extraction potential, is then applied to the decimated face image, and a subset of low-to-mid-frequency DCT coefficients that represents the face adequately and yields the best recognition results is retained. A trade-off among the decimation factor, the number of retained DCT coefficients, and the recognition rate at minimum computation is obtained. Preprocessing of the image is carried out to increase robustness against variations in pose and illumination. The new model has been tested on different databases, including the ORL database, the Yale database and a color database, and has performed much better than other techniques. The significance of the model is twofold: (1) dimension reduction to an effective and suitable face image resolution, and (2) retention of the appropriate DCT coefficients to achieve the best recognition results under varying pose, intensity and illumination.
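A minimal sketch of the double reduction, decimation followed by retention of a low-to-mid-frequency DCT block, might look as follows; the decimation here is plain subsampling without the anti-aliasing the paper's decimation algorithm presumably includes, and the data and parameters are illustrative.

```python
import numpy as np
from scipy.fft import dctn

def face_features(img, decimate=2, n_coeffs=8):
    """Decimate the face image, take the 2-D DCT, and keep the top-left
    (low/mid-frequency) n_coeffs x n_coeffs block as the feature vector."""
    small = img[::decimate, ::decimate]      # naive decimation sketch
    coeffs = dctn(small, norm='ortho')
    return coeffs[:n_coeffs, :n_coeffs].ravel()

def nearest_neighbour(probe, gallery, labels):
    d = [np.linalg.norm(probe - g) for g in gallery]
    return labels[int(np.argmin(d))]

# illustrative usage with random stand-ins for 112x92 ORL-sized faces
rng = np.random.default_rng(1)
gallery_imgs = [rng.random((112, 92)) for _ in range(5)]
gallery = [face_features(im) for im in gallery_imgs]
print(nearest_neighbour(face_features(gallery_imgs[3]), gallery, list("ABCDE")))
```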
Abstract: Ferroresonance is a nonlinear electrical phenomenon which frequently occurs in power systems due to transmission line faults, single- or multi-phase switching on the lines, and the use of saturable transformers. In this study, the ferroresonance phenomena are investigated on a model of the 380 kV West Anatolian Electric Power Network in Turkey. The ferroresonance event is observed as a result of removing the loads at the end of the lines. In this sense, two different cases are considered. In the first, switching is applied at the 2nd second and the ferroresonance effects are observed between the 2nd and 4th seconds in the voltage variations of phase R. The ferroresonance and non-ferroresonance parts of the overall data are then compared with each other using Fourier transform techniques to show the ferroresonance effects.
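The Fourier-based comparison amounts to computing amplitude spectra of the ferroresonant and normal segments and contrasting their frequency content; the following sketch uses synthetic phase-R-like signals in which subharmonic and harmonic components stand in for the ferroresonant interval.

```python
import numpy as np

def spectrum(x, fs):
    """One-sided amplitude spectrum of a voltage record."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    return f, 2 * np.abs(X) / len(x)

# illustrative phase-R voltage: 50 Hz fundamental; subharmonic and harmonic
# components stand in for the ferroresonant part of the record
fs = 10_000
t = np.arange(0, 2, 1 / fs)
normal = np.sin(2 * np.pi * 50 * t)
ferro = normal + 0.6 * np.sin(2 * np.pi * 25 * t) + 0.4 * np.sin(2 * np.pi * 150 * t)

for name, seg in (("normal", normal), ("ferroresonant", ferro)):
    f, A = spectrum(seg, fs)
    print(name, "dominant components (Hz):", f[A > 0.2].round(1))
```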
Abstract: Linear stochastic estimation and quadratic stochastic estimation techniques were applied to estimate the entire velocity flow-field of an open cavity with a length-to-depth ratio of 2. The estimations used the instantaneous velocity magnitude as the estimator, with measurements obtained by Particle Image Velocimetry. The predicted flow was compared against the original flow-field in terms of the Reynolds stresses and turbulent kinetic energy. Quadratic stochastic estimation proved superior to linear stochastic estimation in resolving the shear layer flow. When the velocity fluctuations were scaled up in the quadratic estimate, both the time-averaged quantities and the instantaneous cavity flow could be predicted rather accurately.
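Both estimation techniques reduce to least-squares fits built from measured correlations: linear stochastic estimation uses the estimator signals directly, while quadratic stochastic estimation adds their pairwise products. Below is a minimal sketch with synthetic estimator data standing in for the PIV measurements.

```python
import numpy as np
from itertools import combinations_with_replacement

def design_matrix(E, quadratic=False):
    cols = [E]
    if quadratic:
        # append products E_i * E_j (i <= j) for the quadratic terms
        pairs = combinations_with_replacement(range(E.shape[1]), 2)
        cols.append(np.column_stack([E[:, i] * E[:, j] for i, j in pairs]))
    return np.hstack(cols)

def fit_se(E, u, quadratic=False):
    """Least-squares LSE/QSE coefficients; solving the normal equations
    is equivalent to using the measured two-point correlations."""
    X = design_matrix(E, quadratic)
    return np.linalg.lstsq(X, u, rcond=None)[0]

# illustrative data: 3 estimator probes, 500 snapshots, quadratic truth
rng = np.random.default_rng(2)
E = rng.standard_normal((500, 3))
u = 1.5 * E[:, 0] - 0.5 * E[:, 1] * E[:, 2] + 0.1 * rng.standard_normal(500)

for quad in (False, True):
    c = fit_se(E, u, quad)
    u_hat = design_matrix(E, quad) @ c
    print("QSE" if quad else "LSE", "residual rms:",
          round(float(np.std(u - u_hat)), 3))
```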
Abstract: A boundary layer wind tunnel facility has been adopted in order to conduct experimental measurements of the flow field around a model of the Panorama Giustinelli Building, Trieste (Italy). Information on the main flow structures has been obtained by means of flow visualization techniques and has been compared to numerical predictions of the vortical structures developing on top of the roof, in order to investigate the optimal positioning for a vertical-axis wind energy conversion system. Good agreement between experimental measurements and numerical predictions has been registered.
Abstract: The plastic forming of sheet plate plays an important role in metal forming. The traditional tool-design techniques for sheet forming operations used in industry are experimental and expensive. Predicting the forming results and determining the punch force, blank holder forces and the thickness distribution of the sheet metal will decrease the production cost and time. In this paper, a multi-stage deep drawing simulation of an industrial part is presented using the finite element method. The entire sequence of production steps, including additional operations such as intermediate annealing and springback, has been simulated with the ABAQUS software under axisymmetric conditions. Simulation results such as sheet thickness distribution, punch force and residual stresses have been extracted at each stage, and the sheet thickness distribution has been compared with experimental results. The comparison shows that the FE model is in close agreement with the experiment.
Abstract: Medical imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound (US) and X-ray are used to diagnose disease. These modalities provide flexible means of reviewing anatomical cross-sections and physiological state in different parts of the human body. Raw medical images have huge file sizes and large storage requirements, so their size must be reduced to make them practical for telemedicine applications. Image compression is thus a key factor in reducing the bit rate for transmission or storage while maintaining an acceptable reproduction quality, but it naturally raises the question of how much an image can be compressed while still preserving sufficient information for a given clinical application. Many techniques for data compression have been introduced. In this study, three different types of MRI images (brain, spine and knee) have been compressed and reconstructed using the wavelet transform. Subjective and objective evaluations have been carried out to investigate the clinical information quality of the compressed images. For the objective evaluation, the results show that the PSNR, which indicates the quality of the reconstructed image, ranges from 21.95 dB to 30.80 dB for brain, 27.25 dB to 35.75 dB for spine, and 26.93 dB to 34.93 dB for knee images. For the subjective evaluation, the results show that a compression ratio of 40:1 was acceptable for the brain images, whereas 50:1 was acceptable for the spine and knee images.
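A minimal sketch of the compress-reconstruct-evaluate loop, assuming the PyWavelets library, a synthetic smooth image as a stand-in for an MRI slice, and simple coefficient thresholding as the compression mechanism (the abstract does not specify the paper's actual coder):

```python
import numpy as np
import pywt

def compress(img, wavelet='db4', level=3, keep=0.02):
    """Wavelet compression sketch: keep only the largest `keep` fraction
    of coefficients, zero the rest, and reconstruct."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1 - keep)
    arr[np.abs(arr) < thresh] = 0.0
    kept = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
    return pywt.waverec2(kept, wavelet)

def psnr(orig, recon, peak=255.0):
    recon = recon[:orig.shape[0], :orig.shape[1]]
    mse = np.mean((orig.astype(float) - recon) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# smooth synthetic phantom standing in for an MRI slice
x = np.linspace(-1, 1, 256)
img = 191 * np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 0.3) + 64
print(round(psnr(img, compress(img)), 2), "dB")
```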
Abstract: It has become crucial over the years for nations to improve their credit scoring methods and techniques in light of the increasing volatility of the global economy. Statistical methods and tools have been the favoured means for this; however, artificial intelligence and soft computing based techniques are becoming increasingly preferred due to their proficiency, precision and relative simplicity. This work presents a comparison between Support Vector Machines and Artificial Neural Networks, two popular soft computing models, when applied to credit scoring. Among the different criteria that can be used for such a comparison, accuracy, computational complexity and processing time were selected to evaluate both models. Furthermore, the German credit scoring dataset, a real-world dataset, is used to train and test both developed models. Experimental results obtained from our study suggest that although both soft computing models can be used with a high degree of accuracy, Artificial Neural Networks deliver better results than Support Vector Machines.
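A minimal sketch of such a comparison using scikit-learn; synthetic data stands in for the German credit dataset (whose categorical preprocessing is out of scope here), and the hyperparameters are illustrative rather than the paper's.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# stand-in for the German credit data: 1000 applicants, 24 numeric features,
# imbalanced good/bad classes
X, y = make_classification(n_samples=1000, n_features=24,
                           weights=[0.7, 0.3], random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "ANN": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(20,),
                                       max_iter=1000, random_state=0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(name, "accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```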
Abstract: This paper presents Reliability-Based Topology Optimization (RBTO) based on Evolutionary Structural Optimization (ESO). An actual design involves uncertain conditions such as material properties, operational loads and dimensional variation. Deterministic Topology Optimization (DTO) is obtained without considering these uncertainties. RBTO, in contrast, involves the evaluation of probabilistic constraints, which can be done in two different ways: the reliability index approach (RIA) and the performance measure approach (PMA). The limit state function is approximated using Monte Carlo Simulation and Central Composite Design for the reliability analysis. ESO, one of the topology optimization techniques, is adopted for the topology optimization. Numerical examples are presented to compare DTO with RBTO.
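The reliability-analysis ingredient can be illustrated in isolation: given a limit state function g(X), Monte Carlo simulation estimates the failure probability Pf = P[g(X) < 0] and the corresponding reliability index. The limit state and distributions below are illustrative, not the paper's structural model.

```python
import numpy as np
from scipy.stats import norm

def monte_carlo_reliability(g, sample, n=200_000, seed=0):
    """Monte Carlo estimate of the failure probability P[g(X) < 0] and
    the corresponding reliability index beta = -Phi^{-1}(Pf)."""
    rng = np.random.default_rng(seed)
    X = sample(rng, n)
    pf = np.mean(g(X) < 0.0)
    return pf, -norm.ppf(pf)

# illustrative limit state: capacity R ~ N(12, 1.5) minus load S ~ N(8, 1.0)
g = lambda X: X[:, 0] - X[:, 1]
sample = lambda rng, n: np.column_stack([rng.normal(12, 1.5, n),
                                         rng.normal(8, 1.0, n)])
pf, beta = monte_carlo_reliability(g, sample)
print(f"Pf = {pf:.4f}, beta = {beta:.2f}")
```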
Abstract: Environmental micro-organisms include a large number of taxa, and some species that are generally considered non-pathogenic can represent a risk in certain conditions, especially for elderly people and immunocompromised individuals. Chemotaxonomic techniques are powerful tools for identifying environmental micro-organisms, and cellular fatty acid methyl ester (FAME) content provides an effective fingerprinting method. A system based on an unsupervised artificial neural network (ANN) was set up using, as learning data, the fatty acid profiles of standard bacterial strains obtained by gas chromatography. We analysed 45 certified strains belonging to the Acinetobacter, Aeromonas, Alcaligenes, Aquaspirillum, Arthrobacter, Bacillus, Brevundimonas, Enterobacter, Flavobacterium, Micrococcus, Pseudomonas, Serratia, Shewanella and Vibrio genera. A set of 79 bacteria isolated from a drinking water line (AMGA, the major water supply system in Genoa) was used as an identification example and compared with the standard MIDI method. The resulting ANN output map proved to be a very powerful tool for identifying these fresh isolates.
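An unsupervised ANN with a 2-D output map is characteristic of a Kohonen self-organizing map; under that assumption, the following is a minimal sketch of how FAME profiles could be mapped to output units, with random data standing in for the normalized fatty acid profiles.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=100, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal Kohonen self-organizing map: each profile is pulled toward
    its best-matching unit and that unit's grid neighbourhood."""
    rng = np.random.default_rng(seed)
    W = rng.random((grid[0], grid[1], data.shape[1]))
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)    # shrinking neighbourhood
        for x in rng.permutation(data):
            d = np.linalg.norm(W - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            W += lr * h[..., None] * (x - W)
    return W

def best_unit(W, x):
    d = np.linalg.norm(W - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

# illustrative stand-in for 45 normalized fatty-acid profiles (20 peaks each)
rng = np.random.default_rng(1)
profiles = rng.random((45, 20))
W = train_som(profiles)
print("isolate mapped to output unit", best_unit(W, profiles[0]))
```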
Abstract: In face recognition, feature extraction techniques attempt to search for an appropriate representation of the data. However, when the feature dimension is larger than the sample size, performance degrades. Hence, we propose a method called Normalization Discriminant Independent Component Analysis (NDICA). The input data are regularized to obtain the most reliable features and then processed using Independent Component Analysis (ICA). The proposed method is evaluated on three face databases: Olivetti Research Ltd (ORL), Face Recognition Technology (FERET) and Face Recognition Grand Challenge (FRGC). NDICA showed its effectiveness compared with other unsupervised and supervised techniques.
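A rough sketch of a regularize-then-ICA pipeline in the spirit of the proposed method; the normalization step is approximated here by ridge-regularized PCA whitening, which is an assumption of this sketch rather than the paper's exact NDICA formulation.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def regularized_ica_features(X, n_components=40, eps=1e-3):
    """Regularize (ridge-stabilized PCA whitening) and then extract
    independent components, as a stand-in for the NDICA pipeline."""
    X = X - X.mean(axis=0)
    pca = PCA(n_components=n_components)
    Z = pca.fit_transform(X)
    Z /= np.sqrt(pca.explained_variance_ + eps)   # regularized whitening
    ica = FastICA(n_components=n_components, whiten='unit-variance',
                  random_state=0)
    return ica.fit_transform(Z)

# illustrative stand-in for 100 face images of 32x32 pixels
rng = np.random.default_rng(4)
X = rng.random((100, 1024))
print(regularized_ica_features(X).shape)   # (100, 40)
```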
Abstract: Several combinations of preprocessing algorithms, feature selection techniques and classifiers can be applied to data classification tasks. This study introduces a new, accurate classifier that consists of four components: signal-to-noise ratio as a feature selection technique, a support vector machine, a Bayesian neural network, and AdaBoost as an ensemble algorithm. To verify the effectiveness of the proposed classifier, seven well-known classifiers are applied to four datasets. The experiments show that using the suggested classifier enhances the classification rates on all datasets.
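The signal-to-noise feature selection component can be sketched on its own: for a binary problem, each feature is scored by its class-mean separation over the pooled class spreads, and the top-k features are kept. The formula below is the commonly used Golub-style SNR criterion, assumed here since the abstract does not detail the paper's variant.

```python
import numpy as np

def snr_scores(X, y):
    """Signal-to-noise feature ranking for a binary problem:
    SNR_j = |mean_j(class 0) - mean_j(class 1)| / (std_j(0) + std_j(1))."""
    X0, X1 = X[y == 0], X[y == 1]
    return np.abs(X0.mean(0) - X1.mean(0)) / (X0.std(0) + X1.std(0) + 1e-12)

def select_top(X, y, k):
    idx = np.argsort(snr_scores(X, y))[::-1][:k]
    return X[:, idx], idx

# illustrative data: only the first 3 of 20 features carry signal
rng = np.random.default_rng(5)
y = rng.integers(0, 2, 300)
X = rng.standard_normal((300, 20))
X[:, :3] += 2.0 * y[:, None]
_, chosen = select_top(X, y, 3)
print("selected features:", sorted(chosen))   # expect [0, 1, 2]
```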
Abstract: In recent years, computers have increased their computing capacity, and the networks interconnecting these machines have improved to reach today's high data transfer rates. Programs that try to take advantage of these new technologies cannot be written using traditional programming techniques, since most algorithms were designed to be executed on a single processor in a non-concurrent form, rather than concurrently on a set of processors working and communicating through a network. This paper presents the ongoing development of a new system for the reconfiguration of computer clusters, taking these new technologies into account.
Abstract: In this paper, optimum adaptive loading algorithms are applied to a multicarrier system with a Space-Time Block Coding (STBC) scheme, associated with space-time processing based on the singular-value decomposition (SVD) of the channel matrix, over Rayleigh fading channels. The SVD method is employed in the MIMO-OFDM system in order to overcome subchannel interference. Chow's and Campello's algorithms have been implemented to obtain a bit and power allocation for each subcarrier, assuming instantaneous channel knowledge. The adaptive loaded SVD-STBC scheme is capable of providing both full rate and full diversity for any number of transmit antennas. The effectiveness of these techniques has been demonstrated through the simulation of an adaptive loaded SVD-STBC system, and the comparison shows that the proposed algorithms ensure better performance in the MIMO case.
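Chow's and Campello's algorithms are discrete bit-loading schemes; their common underlying principle can be illustrated with a greedy (Hughes-Hartogs style) loader that repeatedly assigns the next bit to the subcarrier where it costs the least incremental power. The SNR gap and per-subcarrier gains below are illustrative.

```python
import numpy as np

def greedy_bit_loading(gains, total_bits, gap_db=6.0):
    """Greedy discrete bit loading: the power to carry b bits on a
    subcarrier with SNR gain g is gap * (2**b - 1) / g, so each new bit
    goes to the subcarrier with the smallest incremental power cost."""
    gap = 10 ** (gap_db / 10)
    bits = np.zeros(len(gains), dtype=int)
    power = np.zeros(len(gains))
    for _ in range(total_bits):
        new_power = gap * (2 ** (bits + 1) - 1) / gains
        k = int(np.argmin(new_power - power))   # cheapest extra bit
        bits[k] += 1
        power[k] = new_power[k]
    return bits, power

# illustrative per-subcarrier SNR gains (|H_k|^2 / noise) for 8 subcarriers
gains = np.array([8.0, 5.5, 3.0, 1.2, 0.6, 2.2, 6.8, 4.1])
bits, power = greedy_bit_loading(gains, total_bits=16)
print("bits per subcarrier:", bits, "total power:", round(power.sum(), 2))
```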
Abstract: A reduced order modeling approach for natural
gas transient flow in pipelines is presented. The Euler
equations are considered as the governing equations and
solved numerically using the implicit Steger-Warming flux
vector splitting method. Next, the linearized form of the
equations is derived and the corresponding eigensystem is
obtained. Then, a few dominant flow eigenmodes are used to
construct an efficient reduced-order model. A well-known test
case is presented to demonstrate the accuracy and the
computational efficiency of the proposed method. The results
obtained are in good agreement with those of the direct
numerical method and field data. Moreover, it is shown that
the present reduced-order model is more efficient than the
conventional numerical techniques for transient flow analysis
of natural gas in pipelines.
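The eigenmode-based reduction can be illustrated generically: given a linearized operator, keep a few dominant (least-damped) eigenmodes and evolve only their amplitudes. The random stable operator below is a stand-in for the paper's Euler-equation Jacobian; the Steger-Warming discretization itself is beyond this sketch.

```python
import numpy as np

def modal_rom(A, x0, t, n_modes=3):
    """Reduced-order model by eigenmode projection: keep the n_modes
    least-damped eigenmodes of the linearized operator A and evolve
    only their amplitudes analytically."""
    w, V = np.linalg.eig(A)
    order = np.argsort(-w.real)                  # dominant modes first
    w, V = w[order[:n_modes]], V[:, order[:n_modes]]
    a0 = np.linalg.lstsq(V, x0.astype(complex), rcond=None)[0]  # project IC
    return np.real(V @ (a0[:, None] * np.exp(w[:, None] * t[None, :])))

# illustrative stable linearized operator (stand-in for the flow Jacobian)
rng = np.random.default_rng(6)
M = rng.standard_normal((20, 20))
A = -(M @ M.T) / 20 - 0.1 * np.eye(20)           # negative definite
x0 = rng.standard_normal(20)
t = np.linspace(0, 5, 6)
print(modal_rom(A, x0, t, n_modes=3).shape)      # (20, 6): states x times
```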
Abstract: Among the various HLM techniques, the Multivariate Hierarchical Linear Model (MHLM) is desirable, particularly when multivariate criterion variables are collected and the covariance structure carries information valuable for data analysis. In order to reflect prior information, or to obtain stable results when the sample size and the number of groups are not sufficiently large, the Bayes method has often been employed in hierarchical data analysis. Although the Markov Chain Monte Carlo (MCMC) method is a rather powerful tool for parameter estimation in these cases, MCMC procedures have not been formulated for the MHLM. For this reason, this research presents concrete procedures for parameter estimation through the use of the Gibbs sampler. Lastly, several future topics for the use of the MCMC approach in HLM are discussed.
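As a pared-down illustration of Gibbs sampling for hierarchical data, the following sketch targets a univariate two-level normal model rather than the full MHLM; the conditional draws shown (conjugate normal updates for the means, scaled inverse-chi-square draws for the variances) are the standard building blocks that multivariate procedures extend.

```python
import numpy as np

def gibbs_hierarchical_normal(y, groups, n_iter=2000, seed=0):
    """Gibbs sampler sketch for y_ij ~ N(theta_j, sigma2),
    theta_j ~ N(mu, tau2), with vague priors on mu and the variances."""
    rng = np.random.default_rng(seed)
    J = groups.max() + 1
    n_j = np.bincount(groups)
    ybar_j = np.bincount(groups, weights=y) / n_j
    theta, mu = ybar_j.copy(), y.mean()
    sigma2 = tau2 = y.var()
    samples = []
    for _ in range(n_iter):
        # theta_j | rest: conjugate normal update per group
        prec = n_j / sigma2 + 1 / tau2
        mean = (n_j * ybar_j / sigma2 + mu / tau2) / prec
        theta = rng.normal(mean, 1 / np.sqrt(prec))
        # mu | rest
        mu = rng.normal(theta.mean(), np.sqrt(tau2 / J))
        # variances | rest: scaled inverse-chi-square draws
        sigma2 = np.sum((y - theta[groups]) ** 2) / rng.chisquare(len(y))
        tau2 = np.sum((theta - mu) ** 2) / rng.chisquare(J)
        samples.append(mu)
    return np.array(samples[n_iter // 4:])       # discard burn-in

# illustrative grouped data: 5 groups, 30 observations each
rng = np.random.default_rng(7)
true_theta = rng.normal(10, 2, 5)
groups = np.repeat(np.arange(5), 30)
y = rng.normal(true_theta[groups], 1.0)
print("posterior mean of mu:", round(gibbs_hierarchical_normal(y, groups).mean(), 2))
```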