Abstract: Modern manufacturing facilities are large in scale,
highly complex, and operate with a large number of variables under
closed-loop control. Early and accurate fault detection and diagnosis
for these plants can minimise downtime, increase the safety of plant
operations, and reduce manufacturing costs. Fault detection and
isolation is particularly difficult in the case of faulty analog
control systems, which are not equipped with a monitoring function
that continually visualises the process parameters. In this situation
it is very difficult to relate the severity of a fault to its
consequences for product failure. In this paper we consider an
approach to fault detection, and to the analysis of its effect on
production quality, using adaptive centring and scaling in the
pickling process of cold rolling. The fault appeared on one of the
power units driving a rotary machine, so that this machine could not
track the reference speed given by another machine. The length of the
metal loop then oscillates continuously, which affects product
quality. Using a computerised data acquisition system, the main
machine parameters were monitored, and the fault was detected and
isolated on the basis of an analysis of the monitored data. Normal
and faulty situations were reproduced by an artificial neural network
(ANN) model implemented to simulate the normal and faulty status of
the rotary machine. The correlation between a product quality index
and the residual is used for quality classification.
Abstract: With a great number of pipelines all over the
country, Iran comprises various ecosystems with varying degrees of
fragility and robustness, as well as varied geographical conditions.
This study presents a state-of-the-art method to estimate the
environmental risks of pipelines by recommending rational equations,
including FES, URAS, SRS, RRS, DRS, LURS and IRS as well as FRS, to
calculate the risks. The study was carried out using a relative,
semi-quantitative approach based on land uses and HVAs (High-Value
Areas). GIS was used as a tool to create maps of the environmental
risks, land uses and distances. The main logic behind the formulas
was a distance-based approach together with ESI values and
intersections. Summarizing the results of the study, a geographical
risk map based on the ESIs and the final risk score (FRS) was
created. The results showed that the most sensitive, and therefore
highest-risk, area is one comprising mangrove forests located in the
neighbourhood of the pipeline, while salty lands were the most robust
land-use units in the event of pipeline failure. The study also
showed that mapping pipeline risks with the applied method is more
reliable, more convenient and relatively more comprehensive than
present non-holistic methods for assessing the environmental risks of
pipelines. The focus of the present study is “assessment” rather than
“management”. It is suggested that new policies be implemented to
reduce the negative effects of the pipeline, which has not yet been
completely constructed.
Abstract: The paper presents an on-line recognition machine
(RM) for continuous/isolated, dynamic and static gestures that arise
in Flight Deck Officer (FDO) training. RM is based on a generic
pattern recognition framework. Gestures are represented as templates using
summary statistics. The proposed recognition algorithm exploits temporal
and spatial characteristics of gestures via dynamic programming
and Markovian process. The algorithm predicts corresponding index
of incremental input data in the templates in an on-line mode.
Accumulated consistency in the sequence of prediction provides a
similarity measurement (Score) between input data and the templates.
The algorithm provides an intuitive mechanism for automatic detection
of start/end frames of continuous gestures. In the present paper,
we consider isolated gestures. The performance of RM is evaluated
using four datasets - artificial (W TTest), hand motion (Yang) and
FDO (tracker and vision-based). RM achieves results that are
comparable to, and in agreement with, other on-line and off-line
algorithms such as the hidden Markov model (HMM) and dynamic time
warping (DTW).
The proposed algorithm has the additional advantage of providing
timely feedback for training purposes.
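Since RM is benchmarked against DTW, the core of that baseline can be sketched in a few lines. This is a generic textbook DTW distance, not the authors' implementation; the absolute-difference local cost is an assumption suitable for 1-D feature sequences.

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimum accumulated cost aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

The quadratic table is the standard formulation; for on-line use, RM's incremental prediction avoids recomputing this full table per frame.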
Abstract: In this paper, we introduce a skin detection
method using a histogram approximation based on the mean shift
algorithm. The proposed method applies the mean shift procedure to a
histogram of a skin map of the input image, generated by comparison
with standard skin colors in the CbCr color space, and divides the
background from the skin region by selecting the maximum value
according to brightness level. The proposed method detects the skin
region using the mean shift procedure to determine a maximum value
that becomes the dividing point, rather than using a manually selected
threshold value, as in existing techniques. Even when skin color is
contaminated by illumination, the procedure can accurately segment
the skin region and the background region. The proposed method may
be useful for detecting facial regions as a preprocessing step for
face recognition under various types of illumination.
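The histogram-based mean shift step described above can be illustrated with a minimal 1-D sketch: a window slides over the histogram until it settles on a local mode, which then serves as the dividing point. This is not the authors' code; the flat (uniform) kernel and the bandwidth value are illustrative assumptions.

```python
import numpy as np

def mean_shift_mode(hist, start, bandwidth=5, iters=100, tol=1e-3):
    """Shift a window over a 1-D histogram until it converges on a local mode."""
    bins = np.arange(len(hist), dtype=float)
    x = float(start)
    for _ in range(iters):
        mask = (bins >= x - bandwidth) & (bins <= x + bandwidth)
        w = hist[mask]
        if w.sum() == 0:
            break
        new_x = float((bins[mask] * w).sum() / w.sum())  # weighted mean shift
        if abs(new_x - x) < tol:
            x = new_x
            break
        x = new_x
    return x
```

Selecting the split automatically this way replaces the manually chosen threshold used in the existing techniques the abstract mentions.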
Abstract: Sleep spindles are the most interesting hallmark of
stage 2 sleep EEG. Their accurate identification in a
polysomnographic signal is essential for sleep professionals to help
them mark stage 2 sleep. Sleep spindles are also promising objective
indicators for neurodegenerative disorders. Visual spindle scoring,
however, is a tedious task. In this paper, three different
approaches are used for the automatic detection of sleep spindles:
Short Time Fourier Transform, Wavelet Transform and Wave
Morphology for Spindle Detection. In order to improve the results, a
combination of the three detectors is presented and comparison with
human expert scorers is performed. The best performance is obtained
with a combination of the three algorithms which resulted in a
sensitivity and specificity of 94% when compared to human expert
scorers.
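As a rough illustration of the first of the three approaches, an STFT-based detector can flag windows whose sigma-band power is elevated. The band limits (11-16 Hz), one-second window and median-based threshold below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import stft

def detect_spindles(x, fs, band=(11.0, 16.0), thresh_ratio=2.0):
    """Flag STFT frames whose sigma-band power is high: a crude
    stand-in for an STFT-based sleep spindle detector."""
    f, t, Z = stft(x, fs=fs, nperseg=int(fs))       # 1-second windows
    power = np.abs(Z) ** 2
    in_band = (f >= band[0]) & (f <= band[1])
    band_power = power[in_band].sum(axis=0)          # sigma-band power per frame
    thresh = thresh_ratio * np.median(band_power)    # adaptive threshold
    return t, band_power > thresh
```

A combined detector, as in the paper, would intersect or vote over the outputs of several such detectors before comparing against expert scorings.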
Abstract: Microtomographic images and thin section (TS)
images were analyzed and compared with respect to parameters of
geological interest, such as porosity and its distribution along the
samples. The results show that microtomography (CT) analysis,
although limited by its resolution, provides useful information
about the distribution of porosity (homogeneous or not) and can also
quantify both connected and non-connected pores, i.e., the total
porosity. TS analysis has no limitations concerning resolution, but
is limited by the available experimental data (only a few glass
slides for analysis) and can give information only about the
connected pores, i.e., the effective porosity. The two methods have
their own virtues and flaws, but when paired together they
complement one another, making for a more reliable and complete
analysis.
Abstract: Thirty-three re-wetting tests were conducted with
barley at different combinations of temperature (5.7–46.3 °C) and
relative humidity (48.2–88.6%). The two most commonly used thin-layer
drying and re-wetting models, i.e., the Page and Diffusion models,
were compared for their ability to fit the experimental re-wetting
data, based on the standard error of estimate (SEE) between the
measured and simulated moisture contents. The comparison shows that
both the Page and Diffusion models fit the experimental re-wetting
data of barley well. The average SEE values for the Page and
Diffusion models were 0.176% d.b. and 0.199% d.b., respectively. The
Page and Diffusion models were found to be the most suitable
equations to describe the thin-layer re-wetting characteristics of
barley over a typical five-day re-wetting period. These two models
can be used for the simulation of deep-bed re-wetting of barley
occurring during ventilated storage and deep-bed drying.
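The Page model mentioned above has the form MR = exp(-k t^n); fitting it to moisture-ratio data and computing an SEE-style error can be sketched as follows. The synthetic data and starting values are illustrative, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    """Page thin-layer model: moisture ratio MR = exp(-k * t**n)."""
    return np.exp(-k * t ** n)

# synthetic "re-wetting" data generated from known parameters (illustrative only)
t = np.linspace(0.1, 120.0, 40)             # hours, roughly a five-day period
mr = page_model(t, 0.05, 0.8)

# least-squares fit of k and n, then standard error of estimate
(k_fit, n_fit), _ = curve_fit(page_model, t, mr, p0=(0.01, 1.0))
see = np.sqrt(np.sum((page_model(t, k_fit, n_fit) - mr) ** 2) / (len(t) - 2))
```

The Diffusion model would be fitted the same way with its own functional form, and the two SEE values compared as in the abstract.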
Abstract: Recently, Genetic Algorithms (GA) and the Differential
Evolution (DE) algorithm have attracted considerable attention among
modern heuristic optimization techniques. Since the two approaches
are meant to find a solution to a given objective function but employ
different strategies and computational effort, it is appropriate to
compare their performance. This paper presents the application and
performance comparison of the DE and GA optimization techniques for
flexible AC transmission system (FACTS)-based controller design. The
design objective is to enhance power system stability. The design
problem of the FACTS-based controller is formulated as an
optimization problem, and both the DE and GA techniques are employed
to search for the optimal controller parameters. The performance of
the two optimization techniques is compared. Further, the optimized
controllers are tested on a weakly connected power system subjected
to different disturbances, and their performance is compared with
that of the conventional power system stabilizer (CPSS). Eigenvalue
analysis and non-linear simulation results are presented and
compared to show the effectiveness of both techniques in designing a
FACTS-based controller to enhance power system stability.
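The DE variant compared above can be sketched generically. This is a textbook DE/rand/1/bin minimiser on an arbitrary objective, not the paper's controller-tuning setup; population size, F and CR below are common default assumptions.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin minimiser (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # mutation: three distinct partners, DE/rand/1
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover with at least one mutated gene
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                  # greedy selection
                pop[i], fit[i] = trial, ft
    best = fit.argmin()
    return pop[best], fit[best]
```

In the controller-design setting, f would evaluate a stability-related cost (e.g., a damping-ratio objective from eigenvalue analysis) for a candidate parameter vector.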
Abstract: Removal of PCP by a system combining
biodegradation by biofilm and adsorption was investigated here.
Three studies were conducted employing batch tests, sequencing
batch reactor (SBR) and continuous biofilm activated carbon
column reactor (BACCOR). The combined biofilm-GAC batch
process removed about 30% more PCP than GAC adsorption
alone. For the SBR processes, both the suspended and attached
biomass could remove more than 90% of the PCP after
acclimatisation. BACCOR was able to remove more than 98% of
PCP-Na at concentrations ranging from 10 to 100 mg/L, at empty
bed contact time (EBCT) ranging from 0.75 to 4 hours. Pure and
mixed cultures from BACCOR were tested for use of PCP as sole
carbon and energy source under aerobic conditions. The isolates
were able to degrade up to 42% of PCP under aerobic conditions in
pure cultures. However, mixed cultures were able to degrade
more than 99% of the PCP, indicating interdependence among species.
Abstract: The environmental performance of the U.S. states is investigated for the period 1990-2007 using Stochastic Frontier Analysis (SFA). SFA accounts for both an efficiency measure and the stochastic noise affecting the frontier. The frontier is formed using indicators of GDP, energy consumption, population, and CO2 emissions. For comparability, all indicators are expressed as ratios to the respective totals. Statistical information from the U.S. Energy Information Administration is used. The results reveal bell-shaped dynamics of the environmental efficiency scores: the average efficiency score rises from 97.6% in 1990 to 99.6% in 1999, and then falls to 98.4% in 2007. The main factor is an insufficient decrease in the growth rate of CO2 emissions relative to the growth of GDP, population and energy consumption. Data for 2008, following the research period, suggest that the environmental performance of the U.S. states has improved in recent years.
Abstract: Energy intensity (energy consumption intensity) is a
global index that measures the energy required to produce a specific
value of goods and services in each country. It is computed in terms
of either primary energy supply or final energy consumption. In this
study, the Divisia method is used to decompose energy consumption
and energy intensity. The method decomposes consumption and energy
intensity into production, structural and net-intensity effects, and
can be applied in time-series or two-period form. This study
analytically investigates changes in consumption and energy
intensity in the economic sectors of Iran, and more specifically in
road transportation (rail and road). Our results show that the
contribution of the structural effect (changes in the mix of
economic activities) is very low, and that the net energy-intensity
effect makes the larger contribution to changes in consumption and
energy intensity. In other words, the high consumption of energy is
due to the intensity of energy consumption and not to the structural
effect of the transportation sector.
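The Divisia-style decomposition can be illustrated with the additive log-mean Divisia (LMDI-I) form, a common Divisia variant. The sectoral shares and intensities below are made-up numbers for illustration, not data from this study.

```python
import numpy as np

def lmdi_decompose(Q0, QT, S0, ST, I0, IT):
    """Additive LMDI-I decomposition of the change in energy use
    E = Q * S_i * I_i (activity * sector share * sector intensity)
    into activity, structural and intensity effects."""
    E0, ET = Q0 * S0 * I0, QT * ST * IT          # sectoral energy, base/final year
    L = (ET - E0) / np.log(ET / E0)              # logarithmic mean weights
    activity = (L * np.log(QT / Q0)).sum()
    structure = (L * np.log(ST / S0)).sum()
    intensity = (L * np.log(IT / I0)).sum()
    return activity, structure, intensity
```

A virtue of the LMDI-I form is exact additivity: the three effects sum to the total change in energy use, with no residual term.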
Abstract: The empirical mode decomposition (EMD) represents any time series as a finite set of basis functions. The bases, termed intrinsic mode functions (IMFs), are mutually orthogonal and contain a minimum amount of cross-information. The EMD successively extracts the IMFs with the highest local frequencies in a recursive way, which effectively yields a set of low-pass filters based entirely on the properties exhibited by the data. In this paper, EMD is applied to explore the properties of multi-year air temperature records and to observe the effects of climate change under global warming. The method decomposes the original time series into intrinsic time scales and is capable of analyzing the nonlinear, non-stationary climatic time series that cause problems for many linear statistical methods and their users. The analysis results show that the modes of EMD exhibit seasonal variability. Most of the IMFs have a normal distribution, and the energy density distribution of the IMFs satisfies a Chi-square distribution. The IMFs are effective in isolating physical processes of various time scales and are also statistically significant. The results further show that the EMD method does a good job of revealing many characteristics of interannual climate variability, and suggest that fluctuations of every single climate element, such as temperature, are the result of variations in the global atmospheric circulation.
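The core of EMD is the sifting step: subtract the mean of the cubic-spline envelopes through the local maxima and minima, and repeat until an IMF is obtained. A deliberately simplified single sifting step, with no boundary handling or stopping criterion (both matter in practice), might look like this:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x):
    """One sifting step of EMD: remove the mean of the upper and lower
    cubic-spline envelopes (simplified sketch, no boundary treatment)."""
    t = np.arange(len(x))
    # interior local maxima and minima by strict neighbour comparison
    maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if len(maxima) < 2 or len(minima) < 2:
        return x                       # too few extrema to build envelopes
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    return x - (upper + lower) / 2.0   # candidate IMF after one sift
```

Full EMD iterates this until a stopping criterion is met, stores the IMF, subtracts it from the signal, and repeats on the residue until only a monotonic trend remains.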
Abstract: A self-evolution algorithm for optimizing neural networks using a combination of PSO and JPSO is proposed. The algorithm optimizes both the network topology and the parameters simultaneously, with the aim of achieving the desired accuracy with less complicated networks. The performance of the proposed approach is compared with that of conventional back-propagation networks using several synthetic functions, with better results in the case of the former. The proposed algorithm is also applied to a slope stability problem to estimate the critical factor of safety. Based on the results obtained, the proposed self-evolving network produced a better estimate of the critical safety factor than the conventional BPN network.
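The continuous-parameter half of the approach above is standard global-best PSO. The following minimal sketch optimizes a plain objective; coupling it with JPSO to evolve the topology, as the paper does, is not shown. The inertia and acceleration coefficients are common default assumptions.

```python
import numpy as np

def pso_minimize(f, dim, bounds=(-5.0, 5.0), swarm=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (swarm, dim))
    v = np.zeros((swarm, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()                  # global best position
    for _ in range(iters):
        r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval                           # update personal bests
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

In the network-training setting, f would map a weight vector to a training loss for a fixed topology.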
Abstract: This study was carried out to investigate the
effects of salinity on the carbon isotope discrimination (Δ) of
shoots and roots of four sugar beet cultivars: Madison (of British
origin) and three Iranian cultivars (7233-P12, 7233-P21 and
7233-P29). Plants were grown in a sand culture medium under
greenhouse conditions and were irrigated with saline water (tap
water as control; 50 mM, 150 mM, 250 mM and 350 mM of NaCl + CaCl2
in a 5:1 molar ratio) from the four-leaf stage for 16 weeks. Carbon
isotope discrimination decreased significantly with increasing
salinity. Significant differences in Δ between shoot and root were
observed in all cultivars and at all levels of salinity. The Madison
cultivar showed lower Δ in shoot and root than the three other
cultivars at all levels of salinity except the control, whereas
cultivar 7233-P29 had significantly higher Δ values at salinities of
150 mM and above. Therefore, Δ might be applicable as a useful tool
for studying the salinity tolerance of sugar beet genotypes.
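Carbon isotope discrimination is conventionally computed from the δ13C values of source air and plant tissue (the Farquhar convention); a one-line helper makes the definition concrete. The formula and the example δ13C values below (≈ -8‰ for air) are standard textbook assumptions, not data from this study.

```python
def carbon_discrimination(delta_air, delta_plant):
    """Carbon isotope discrimination, delta values in per mil:
        D = (d13C_air - d13C_plant) / (1 + d13C_plant / 1000)
    """
    return (delta_air - delta_plant) / (1 + delta_plant / 1000.0)
```

Lower Δ generally reflects tighter stomatal regulation under stress, which is why Δ tracks salinity tolerance in the abstract's comparison of cultivars.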
Abstract: In April 2009, a new variant of the Influenza A virus
subtype H1N1 emerged in Mexico and spread all over the world.
Influenza A has three subtypes circulating in humans (H1N1, H1N2 and
H3N2); influenza types B and C tend to be associated with local or
regional epidemics. Preliminary genetic characterization identified
the new viruses as swine influenza A (H1N1) viruses. Nucleotide
sequence analysis shows that the haemagglutinin (HA) gene and the
majority of the other genes are similar to those of swine influenza
viruses; in particular, the genes coding for the neuraminidase (NA)
and matrix (M) proteins are similar to the corresponding swine
influenza genes. Sequence similarity between the 2009 A (H1N1) virus
and its nearest relatives indicates that its gene segments have been
circulating undetected for an extended period. Maximum Composite
Likelihood (MCL) analysis of the nucleic acid sequences, together
with empirical DNA base frequencies, reveals a phylogenetic
relationship amongst the HA genes of H1N1 viruses deposited in
GenBank, which show high nucleotide sequence homology.
In this paper, we used 16 HA nucleotide sequences from NCBI to
compute sequence relationships and similarity for swine influenza A
virus. Using the MCL method, the result is 28%; 36.64% for the
optimal tree with the sum of branch lengths; 35.62% for the
interior-branch phylogeny of the Neighbor-Joining tree; 1.85% for
the overall transition/transversion ratio; and 8.28% for the overall
mean distance.
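The transition/transversion statistic reported above counts A↔G and C↔T changes (transitions) against purine↔pyrimidine changes (transversions) between aligned sequences. A minimal pairwise version, an illustration rather than the MEGA-style composite estimate used in the paper, is:

```python
def ti_tv_ratio(seq1, seq2):
    """Transition/transversion ratio between two aligned DNA sequences
    (identical sites, gaps and ambiguous bases are skipped)."""
    purines, pyrimidines = {"A", "G"}, {"C", "T"}
    ti = tv = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a == b or a not in "ACGT" or b not in "ACGT":
            continue
        if {a, b} <= purines or {a, b} <= pyrimidines:
            ti += 1          # transition: A<->G or C<->T
        else:
            tv += 1          # transversion: purine <-> pyrimidine
    return ti / tv if tv else float("inf")
```

Over a set of 16 sequences, the overall ratio would be estimated jointly across all pairs under a substitution model rather than per pair as here.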
Abstract: Modern applications realized on FPGAs exhibit high connectivity demands. Throughout this paper we study the routing constraints of Virtex devices and propose a systematic methodology for designing a novel general-purpose interconnection network targeting reconfigurable architectures. This network consists of multiple segment wires and SB patterns, appropriately selected and assigned across the device. The goal of our methodology is to maximize the hardware utilization of the fabricated routing resources. The derived interconnection scheme is integrated on a Virtex-style FPGA, a device characterized both by its high performance and by its low energy requirements. Accordingly, the design criterion that guided our architectural selections was the minimal Energy×Delay Product (EDP). The methodology is fully supported by three new software tools, which belong to the MEANDER Design Framework. Using a typical set of MCNC benchmarks, an extensive comparison study in terms of several critical parameters proves the effectiveness of the derived interconnection network. More specifically, we achieve an average Energy×Delay Product reduction of 63%, a performance increase of 26%, a reduction in leakage power of 21%, and a reduction in total energy consumption of 11%, at the expense of a 20% increase in channel width.
Abstract: Automatic reading of handwritten cheques is a
computationally complex process, and it plays an important role in
financial risk management. Machine vision and learning provide a
viable solution to this problem. Research effort has mostly been
focused on recognizing diverse pitches of cheques and demand drafts
with an identical outline. However, most of these methods employ
template matching to localize the pitches, and such schemes can fail
when applied to the different types of outline maintained by
different banks. In this paper, the so-called outline problem is
resolved by a cheque information tree (CIT), which generalizes the
localization method to extract active regions of entities. In
addition, a weight-based density plot (WBDP) is used to isolate text
entities and read complete pitches. Recognition is based on texture
features using neural classifiers. The legal amount is subsequently
recognized using both texture and perceptual features. A
post-processing phase is invoked to detect incorrect readings using
a Type-2 grammar and a Turing machine. The performance of the
proposed system was evaluated using cheques and demand drafts of 22
different banks. The test data consist of a collection of 1540
leaves obtained from 10 different account holders from each bank.
Results show that this approach can be deployed easily without
significant design amendments.
Abstract: The analytical seismic response of a multi-story
building supported on a base isolation system is investigated under
real earthquake motion. The superstructure is idealized as a
shear-type flexible building with a lateral degree of freedom at
each floor. The force-deformation behaviour of the isolation system
is modelled by bi-linear behaviour, which can effectively represent
all isolation systems in practice. The governing equations of motion
of the isolated structural system are derived, and the response of
the system is obtained numerically by a step-by-step method under
three real recorded earthquake motions and the pulse motions
associated with near-fault earthquake motion. The variation of the
top floor acceleration, inter-story drift, base shear and bearing
displacement of the isolated building is studied under different
initial stiffnesses of the bi-linear isolation system. It was
observed that a high initial stiffness of the isolation system
excites higher modes in the base-isolated structure and generates
increased floor accelerations and story drift. Such behaviour of a
base-isolated building, especially one supported on a sliding type
of isolation system, can be detrimental to sensitive equipment
installed in the building. On the other hand, the bearing
displacement and base shear were found to reduce marginally with an
increase in the initial stiffness of the isolation system. Further,
the above behaviour of the base-isolated building was observed for
different parameters of the bearing (i.e., post-yield stiffness and
characteristic strength) and of the earthquake motions (i.e., real
time histories as well as pulse-type motions).
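The bi-linear force-deformation law described above can be sketched as an elastic predictor with initial stiffness k1, clipped to the post-yield envelope of slope k2 offset by the characteristic strength Q. This is a sketch of the isolator constitutive law only, not the paper's full multi-story step-by-step simulation; parameter values in the example are arbitrary.

```python
def bilinear_hysteresis(u_history, k1, k2, Q):
    """Force trace of a bi-linear hysteretic isolator driven by a
    displacement history: elastic predictor with stiffness k1,
    clipped to the envelope k2*u +/- Q (Q = characteristic strength)."""
    f, u_prev, trace = 0.0, 0.0, []
    for u in u_history:
        f = f + k1 * (u - u_prev)                  # elastic predictor
        f = min(max(f, k2 * u - Q), k2 * u + Q)    # yield-envelope clipping
        trace.append(f)
        u_prev = u
    return trace
```

Unloading re-engages the stiff branch k1, which is what produces the hysteresis loops; a high k1, as the abstract notes, transmits sharper force changes into the superstructure and excites higher modes.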
Abstract: The need to solve complicated multi-dimensional
scientific problems, together with the need to optimize several
objective functions, is one of the main motivations behind
artificial intelligence and heuristic methods.
In this paper, we introduce a new method for multiobjective
optimization based on learning automata. In the proposed method, the
search space is divided into separate hyper-cubes and each cube is
considered an action. By combining all the objective functions with
separate weights, a cumulative function is formed and used as the
fitness function. By applying all the cubes to the cumulative
function, we calculate the reinforcement of each action, and the
algorithm continues in this way to find the best solutions. In this
method, a lateral memory is used to gather the significant points of
each iteration of the algorithm. Finally, by considering the
domination factor, the Pareto front is estimated. The results of
several experiments show the effectiveness of this method in
comparison with a genetic algorithm based method.
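The final Pareto-front estimation step rests on the standard dominance relation, which can be stated compactly. This is the generic definition (for minimisation), not the paper's automaton-specific bookkeeping:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

Applying this filter to the points accumulated in the lateral memory yields the estimated front reported in the abstract.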
Abstract: This paper is a continuation of our interest in the influence of temperature on specific retention volumes and the resulting infinite dilution activity coefficients, which has a direct effect on the design of absorption and stripping columns for the abatement of volatile organic compounds. The interaction of 13 volatile organic compounds (VOCs) with polydimethylsiloxane (PDMS) at varying temperatures was studied by gas-liquid chromatography (GLC). The infinite dilution activity coefficients and specific retention volumes obtained in this study were found to be in agreement with those obtained from static headspace and group contribution methods by the authors, as well as with literature values for similar systems. Temperature variation also allows transport calculations for different seasons. The results of this work confirm that PDMS is well suited for the scrubbing of VOCs from waste gas streams. Plots of specific retention volumes against temperature gave linear van 't Hoff plots.