Abstract: In this paper a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient, improving recognition results, and outperforming the common DCT-based face recognition technique. In pattern recognition, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results change with face image resolution and are optimal at a certain resolution level. In the proposed model, an image decimation algorithm is first applied to the face image to reduce its dimensions to the resolution level that provides the best recognition results. The Discrete Cosine Transform (DCT), chosen for its computational speed and feature extraction potential, is then applied to the decimated image. A subset of low- to mid-frequency DCT coefficients that represents the face adequately and provides the best recognition results is retained. A tradeoff among the decimation factor, the number of retained DCT coefficients, and the recognition rate at minimum computation is obtained. The image is preprocessed to increase robustness against variations in pose and illumination. The new model has been tested on different databases, including the ORL, Yale and EME color databases.
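The two reduction stages described above (decimation to a chosen resolution, then retention of low-to-mid frequency DCT coefficients) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the decimation factor, image size and number of retained coefficients are hypothetical, and a plain zigzag scan stands in for whatever coefficient-selection rule the authors used.

```python
import numpy as np

def decimate_image(img, factor):
    """Reduce resolution by simple block averaging with the given factor."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def dct2(img):
    """2-D DCT applied along rows and columns."""
    return dct_matrix(img.shape[0]) @ img @ dct_matrix(img.shape[1]).T

def zigzag_coefficients(coeffs, n_keep):
    """Retain the first n_keep coefficients in zigzag (low-to-mid frequency) order."""
    h, w = coeffs.shape
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda ij: (ij[0] + ij[1],
                                   ij[0] if (ij[0] + ij[1]) % 2 else -ij[0]))
    return np.array([coeffs[i, j] for i, j in order[:n_keep]])

# Hypothetical 64x64 "face"; factor and n_keep are illustrative choices only.
face = np.random.rand(64, 64)
small = decimate_image(face, 2)                   # first reduction: 32x32
features = zigzag_coefficients(dct2(small), 64)   # second reduction: 64 features
print(small.shape, features.shape)
```

In a full system the retained coefficient vector would feed a classifier; the tradeoff the abstract mentions corresponds to sweeping `factor` and `n_keep` against recognition rate.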
Abstract: As in other countries of Central and Eastern Europe, the economic restructuring that occurred in the last decade of the twentieth century affected the mining industry in Romania, an oversized and heavily subsidized sector before 1989. More than a decade after the beginning of mining restructuring, an evaluation of the current social implications of the process is required, together with an efficiency analysis of the adaptation mechanisms developed at the governmental level. This article aims to provide an insight into these issues through case studies conducted in the most important coal basin of Romania, the Petroşani Depression.
Abstract: In the current research, a neuro-fuzzy model and a regression model were developed to predict the Material Removal Rate (MRR) in the Electrical Discharge Machining (EDM) process for AISI D2 tool steel with a copper electrode. Extensive experiments were conducted with various levels of discharge current, pulse duration and duty cycle. The experimental data were split into two sets, one for training and the other for validation of the models. The training data were used to develop the models, and the test data, not used earlier in model development, were used to validate them. Subsequently, the models were compared. It was found that the predicted and experimental results were in good agreement, and the coefficients of correlation were found to be 0.999 and 0.974 for the neuro-fuzzy and regression models, respectively.
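The regression counterpart of the comparison above can be sketched with an ordinary least-squares fit of MRR on the three process parameters. The data below are invented for illustration (the paper's experimental values are not reproduced here), and a simple linear model stands in for whatever regression form the authors chose.

```python
import numpy as np

# Hypothetical EDM data: columns are discharge current (A), pulse duration (us),
# duty cycle (%); y is material removal rate (mm^3/min). Values illustrative only.
X = np.array([[ 8, 100, 40], [ 8, 200, 60], [16, 100, 60],
              [16, 200, 40], [24, 100, 40], [24, 200, 60]], dtype=float)
y = np.array([4.1, 5.0, 9.8, 8.9, 14.2, 16.1])

A = np.column_stack([np.ones(len(X)), X])       # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # least-squares regression fit

pred = A @ coef
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r = np.sqrt(1 - ss_res / ss_tot)                # coefficient of correlation
print(coef.round(3), round(r, 3))
```

The coefficient of correlation computed this way is what the abstract reports (0.974 for its regression model) when predicted values are compared against held-out experimental data.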
Abstract: The major objective of this paper is to introduce a new method for selecting genes from DNA microarray data. As a selection criterion, we suggest measuring the local changes in the correlation graph of each gene and selecting those genes whose local changes are largest. More precisely, we calculate correlation networks from DNA microarray data of cervical cancer, where each network represents a tissue of a certain tumor stage and each node in a network represents a gene. From these networks we extract one tree for each gene by a local decomposition of the correlation network. The interpretation of a tree is that its n-th level holds the n-nearest neighbor genes, measured by the Dijkstra distance, and, hence, it gives the local embedding of a gene within the correlation network. For the obtained trees we measure the pairwise similarity between trees rooted at the same gene from normal to cancerous tissues. This evaluates the modification of the tree topology due to tumor progression. Finally, we rank the obtained similarity values from all tissue comparisons and select the top-ranked genes. For these genes the local neighborhood in the correlation networks changes most between normal and cancerous tissues. As a result, we find that the top-ranked genes are candidates suspected to be involved in tumor growth. This indicates that our method captures essential information from the underlying DNA microarray data of cervical cancer.
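The core computation above — Dijkstra distances in a correlation network, a gene's n-nearest neighborhood, and a similarity score between the normal and tumor networks — can be sketched as below. The toy graphs, edge weights (here 1 − |correlation|) and the Jaccard similarity are illustrative assumptions; the paper compares full tree topologies, which this neighborhood-set comparison only approximates.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source; graph: {node: {neighbor: weight}}."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def n_nearest(graph, gene, n):
    """The n genes closest to `gene` by Dijkstra distance (its local tree levels)."""
    dist = dijkstra(graph, gene)
    others = sorted((d, g) for g, d in dist.items() if g != gene)
    return {g for _, g in others[:n]}

def neighborhood_similarity(g_normal, g_tumor, gene, n):
    """Jaccard similarity of the gene's n-nearest neighborhoods in both networks."""
    a, b = n_nearest(g_normal, gene, n), n_nearest(g_tumor, gene, n)
    return len(a & b) / len(a | b) if a | b else 1.0

# Toy correlation networks (edge weight = 1 - |correlation|); structure hypothetical.
normal = {"A": {"B": 0.1, "C": 0.2}, "B": {"A": 0.1, "D": 0.3},
          "C": {"A": 0.2}, "D": {"B": 0.3}}
tumor  = {"A": {"D": 0.1, "E": 0.2}, "D": {"A": 0.1},
          "E": {"A": 0.2}, "B": {}, "C": {}}
print(neighborhood_similarity(normal, tumor, "A", 2))  # low -> candidate gene
```

Ranking genes by ascending similarity then yields the candidates whose local embedding changes most between tissue stages.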
Abstract: A saturated liquid is warmed to boiling in a parallelepipedic boiler. Heat is supplied to the liquid through the horizontal bottom of the boiler, the other walls being adiabatic. During boiling, the liquid evaporates through its free surface, deforming it. This surface subdivides the boiler into two regions occupied, on either side, by the boiled liquid (broth) and by the vapor that surmounts it: the broth occupies the lower region and its vapor the upper region. A two-fluid model is used to describe the dynamics of the broth, its vapor and their interface. In this model, the broth is treated as a monophasic fluid (homogeneous model) and forms with its vapor a diphasic pseudo-fluid (two-fluid model). Furthermore, the interface is treated as a mixture zone characterized by a superficial void fraction noted α*. The aim of this article is to describe the dynamics of the interface between the boiled fluid and its vapor within a boiler. The resolution of the problem allowed us to show the evolution of the broth and the level of the liquid.
Abstract: In a particular case of behavioural model reduction by ANNs, a shortening of the validity domain has been found. In mechanics, as in other domains, the notion of validity domain allows the engineer to choose a valid model for a particular analysis or simulation. In the study of the mechanical behaviour of a cantilever beam (using linear and non-linear models), Multi-Layer Perceptron (MLP) Backpropagation (BP) networks have been applied as a model reduction technique. This reduced model is constructed to be more efficient than the non-reduced model. Within a less extended domain, the ANN reduced model correctly estimates the non-linear response at a lower computational cost. It has been found that the neural network model is not able to approximate the linear behaviour, while it approximates the non-linear behaviour very well. The details of the case are provided with an example of cantilever beam behaviour modelling.
Abstract: The influence of octane and benzene on plant cell ultrastructure and on enzymes of basic metabolism, such as nitrogen assimilation and energy generation, has been studied. Different plants: perennial ryegrass (Lolium perenne) and alfalfa (Medicago sativa); crops: maize (Zea mays L.) and bean (Phaseolus vulgaris); shrubs: privet (Ligustrum sempervirens) and trifoliate orange (Poncirus trifoliata); trees: poplar (Populus deltoides) and white mulberry (Morus alba L.) were exposed to the hydrocarbons at different concentrations (1, 10 and 100 mM). Destructive changes in the ultrastructure of bean and maize leaf cells under the influence of benzene vapour were revealed at the level of the photosynthetic and energy-generating subcellular organelles. Different deviations in the structure and distribution of subcellular organelles were observed in alfalfa and ryegrass root cells under the influence of benzene and octane absorbed through the roots. The level of destructive changes is concentration dependent. Benzene at the low 1 and 10 mM concentrations increased glutamate dehydrogenase (GDH) activity in maize roots and leaves and in poplar and mulberry shoots, to a higher extent at the lower, 1 mM concentration. The induction was more intensive in plant roots. The highest tested benzene concentration, 100 mM, was inhibitory to the enzyme in all plants. Octane induced GDH in all grassy plants at all tested concentrations; however, the rate of induction decreased in parallel with the increase in hydrocarbon concentration. Octane at a concentration of 1 mM induced GDH in privet, trifoliate orange and white mulberry shoots. The highest octane concentration, 100 mM, had an inhibitory effect on GDH activity in all plants. Octane had an inductive effect on malate dehydrogenase in almost all plants and tested concentrations, indicating an intensification of the Tricarboxylic Acid Cycle.
These data could be used to elaborate criteria for selecting plants for the phytoremediation of soils contaminated with oil hydrocarbons.
Abstract: The basic objective of this study is to create a regression analysis method that can estimate the length of a plastic hinge, an important design parameter, by making use of the outcomes (lateral load-lateral displacement hysteretic curves) of experimental studies conducted on reinforced square concrete columns. For this aim, the results of 170 different square reinforced concrete column tests have been collected from the existing literature. The parameters thought to affect the plastic hinge length, such as cross-section properties, features of the materials used, axial load level, confinement of the column, longitudinal reinforcement bars in the column, etc., have been obtained from these 170 tests. In the study, regression analysis methods for determining the plastic hinge length from the experimental test results have been separately tested and compared with each other. In addition, the outcomes of the mentioned methods for determining the plastic hinge length of reinforced concrete columns have been compared with other methods available in the literature.
Abstract: Motivated by the impact of maps in enhancing the perception of the quality of life in a region, this work examines the use of spatial analytical techniques to explore the role of space in shaping human development patterns in the Assiut governorate. Variations of the human development index (HDI) across the governorate's villages, districts and cities are mapped using geographic information systems (GIS). Global and local spatial autocorrelation measures are employed to assess the levels of spatial dependency in the data and to map clusters of human development. Results show prominent disparities in HDI between regions of Assiut. Strong patterns of spatial association were found, proving the presence of clusters in the distribution of HDI. Finally, the study indicates several "hot spots" in the governorate as areas for further investigation to explore the attributes of such levels of human development. This is very important for accomplishing the development plan for the poorest regions currently adopted in Egypt.
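The global spatial autocorrelation measure mentioned above is typically Moran's I; a minimal computation is sketched below. The four "regions", their HDI values and the contiguity weight matrix are invented for illustration and are not the Assiut data.

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I spatial autocorrelation of values under weight matrix W."""
    z = values - values.mean()
    n = len(values)
    return (n * (z @ W @ z)) / (W.sum() * (z @ z))

# Toy example: 4 regions on a line with rook-contiguity weights; HDI hypothetical.
hdi = np.array([0.55, 0.58, 0.71, 0.74])
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(round(morans_i(hdi, W), 3))  # positive -> clustering of similar HDI values
```

A value near +1 indicates spatial clustering of similar HDI levels (the "hot spots" of the abstract), near −1 a checkerboard pattern, and near the expectation −1/(n−1) spatial randomness.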
Abstract: Measurement of competitiveness between countries or regions is an important topic of many economic analyses and scientific papers. In the European Union (EU), there is no mainstream approach to evaluating and measuring competitiveness. There are many opinions and methods for measuring and evaluating competitiveness between states or regions at the national and European levels. The methods differ in the structure of the competitiveness indicators used and in the ways they are processed. The aim of the paper is to analyze the main sources of competitive potential of the EU Member States with the help of Factor Analysis (FA) and to classify the EU Member States into homogeneous units (clusters) according to the similarity of selected indicators of competitiveness factors by Cluster Analysis (CA) in the reference years 2000 and 2011. The theoretical part of the paper is devoted to the fundamental bases of competitiveness and the methodology of the FA and CA methods. The empirical part deals with the evaluation of competitiveness factors in the EU Member States and a cluster comparison of the evaluated countries by cluster analysis.
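The FA-then-CA pipeline can be sketched as follows. Everything here is a stand-in under stated assumptions: the indicator data are synthetic, principal-component factor extraction via SVD replaces a full factor analysis with rotation, and a minimal k-means replaces whatever clustering method the paper used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standardized competitiveness indicators: 8 "countries" x 5 indicators.
X = rng.normal(size=(8, 5))
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Factor extraction via SVD (principal-component factors as a stand-in for FA).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U[:, :2] * s[:2]            # country scores on the two leading factors

def kmeans(points, k, iters=50):
    """Minimal k-means on the factor scores (cluster analysis stand-in)."""
    centers = points[:k].copy()
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

labels = kmeans(scores, 3)
print(scores.shape, labels)
```

Clustering on factor scores rather than raw indicators is the design choice the abstract implies: FA compresses correlated indicators into a few competitiveness factors, and CA then groups countries by similarity in that factor space.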
Abstract: This paper unifies power optimization approaches in
various energy converters, such as: thermal, solar, chemical, and
electrochemical engines, in particular fuel cells. Thermodynamics
leads to the converter's efficiency and limiting power. Efficiency
equations serve to solve problems of upgrading and downgrading of
resources. While optimization of steady systems applies the
differential calculus and Lagrange multipliers, dynamic optimization
involves variational calculus and dynamic programming. In reacting
systems chemical affinity constitutes a prevailing component of an
overall efficiency, thus the power is analyzed in terms of an active
part of chemical affinity. The main novelty of the present paper in the
energy yield context consists in showing that the generalized heat
flux Q (involving the traditional heat flux q plus the product of
temperature and the sum products of partial entropies and fluxes of
species) plays in complex cases (solar, chemical and electrochemical)
the same role as the traditional heat q in pure heat engines.
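The generalized heat flux described above can be written compactly; the notation below is an assumption made for illustration, not taken from the paper:

```latex
Q \;=\; q \;+\; T \sum_{k} s_k \, J_k
```

where q is the traditional heat flux, T the temperature, s_k the partial entropy of species k, and J_k the flux of that species. With this Q, the power formulas of pure heat engines carry over to the solar, chemical and electrochemical cases discussed.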
The presented methodology is also applied to power limits in fuel
cells as to systems which are electrochemical flow engines propelled
by chemical reactions. The performance of fuel cells is determined by
magnitudes and directions of participating streams and mechanism of
electric current generation. Voltage lowering below the reversible
voltage is a proper measure of the cell's imperfection. The voltage losses,
called polarization, include the contributions of three main sources:
activation, ohmic and concentration. Examples show power maxima
in fuel cells and prove the relevance of the extension of the thermal
machine theory to chemical and electrochemical systems. The main
novelty of the present paper in the FC context consists in introducing
an effective or reduced Gibbs free energy change between products p
and reactants s, which takes into account the decrease of voltage and
power caused by the incomplete conversion of the overall reaction.
Abstract: Internet computer games are becoming more and more attractive within the context of technology-enhanced learning. Educational games such as quizzes and quests have gained significant success in appealing to and motivating learners to study in a different way, and they provoke steadily increasing interest in new methods of application. Board games are a specific group of games in which figures are manipulated in a competitive play mode with race conditions on a surface according to predefined rules. The article presents a new, formalized model of traditional quizzes, puzzles and quests expressed as multimedia board games, which facilitates the construction process of such games. The authors provide different examples of quizzes and their models in order to demonstrate that the model is quite general and supports not only quizzes, mazes and quests but also any set of teaching activities. The execution process of such models is explained, as well as how they can be useful for the creation and delivery of adaptive e-learning courseware.
Abstract: In this paper a back-propagation artificial neural network (BPANN) with the Levenberg–Marquardt algorithm is employed to predict the limiting drawing ratio (LDR) of the deep drawing process. To prepare a training set for the BPANN, some finite element simulations were carried out. Die and punch radius, die arc radius, friction coefficient, thickness, yield strength of the sheet and strain hardening exponent were used as the input data, and the LDR as the specified output, in the training of the neural network. Given these parameters, the program is able to estimate the LDR for any new condition. Comparing FEM and BPANN results, an acceptable correlation was found.
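The setup above — six process parameters in, LDR out, learned from simulation data — can be sketched with a small network. This is only an illustration under loud assumptions: the training data are synthetic (not FE results), and plain gradient-descent backpropagation stands in for the Levenberg–Marquardt training used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training set: 6 normalized inputs (die/punch radius, die arc radius,
# friction coefficient, thickness, yield strength, strain-hardening exponent) -> LDR.
X = rng.uniform(0, 1, size=(40, 6))
y = 0.5 * X.sum(axis=1, keepdims=True) / 3 + 1.6   # synthetic LDR-like target

# One-hidden-layer network trained by plain gradient-descent backpropagation
# (a simpler stand-in for Levenberg-Marquardt).
W1 = rng.normal(0, 0.5, (6, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)              # hidden activations
    out = H @ W2 + b2                     # linear output: predicted LDR
    err = out - y
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)      # backpropagate through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

H = np.tanh(X @ W1 + b1)
mse = float(((H @ W2 + b2 - y) ** 2).mean())
print(round(mse, 4))
```

Once trained on FE-generated examples, such a network interpolates the LDR for new parameter combinations without rerunning the finite element model, which is the speedup the abstract relies on.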
Abstract: In this work we study the reflection of circularly polarised light from a nano-structured biological material found in the exocuticle of scarab beetles. This material is made of a stack of ultra-thin (~5 nm) uniaxial layers arranged in a left-handed helicoidal stack, which resonantly reflects circularly polarized light. A chirp in the layer thickness combined with a finite absorption coefficient produces a broad, smooth reflectance spectrum. By comparing model calculations and electron microscopy with measured spectra, we can explain our observations and quantify the most relevant structural parameters.
Abstract: In this paper we present an adaptive method for image compression that is based on the complexity level of the image. The basic compressor/de-compressor structure of this method is a multi-layer perceptron artificial neural network. In the adaptive approach, different back-propagation artificial neural networks are used as compressor and de-compressor; this is done by dividing the image into blocks, computing the complexity of each block and then selecting one network for each block according to its complexity value. Three complexity measures, called Entropy, Activity and Pattern-based, are used to determine the level of complexity in image blocks, and their ability in complexity estimation is evaluated and compared. In training and evaluation, each image block is assigned to a network based on its complexity value. Best-SNR is an alternative way of selecting the compressor network for image blocks in the evaluation phase; it chooses the trained network that yields the best SNR in compressing the input image block. In our evaluations, the best results are obtained when overlapping of the blocks is allowed and the compressor networks are chosen based on Best-SNR. In this case, the results demonstrate the superiority of this method compared with previous similar works and standard JPEG coding.
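The block-assignment step above can be sketched with the Entropy complexity measure. The block size, histogram bin count and complexity thresholds below are hypothetical choices, and the "networks" are represented only by their indices; the Activity and Pattern-based measures are not shown.

```python
import numpy as np

def block_entropy(block, bins=16):
    """Shannon entropy of a block's intensity histogram (complexity measure)."""
    hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def assign_networks(image, block_size, thresholds):
    """Split the image into blocks and pick a compressor-network index per block
    by comparing its entropy against the given complexity thresholds."""
    h, w = image.shape
    assignments = {}
    for i in range(0, h - block_size + 1, block_size):
        for j in range(0, w - block_size + 1, block_size):
            e = block_entropy(image[i:i + block_size, j:j + block_size])
            assignments[(i, j)] = sum(e > t for t in thresholds)  # 0..len(thresholds)
    return assignments

# Toy image: flat left half (low complexity), noisy right half (high complexity).
rng = np.random.default_rng(0)
img = np.zeros((16, 16))
img[:, 8:] = rng.uniform(0, 1, size=(16, 8))
a = assign_networks(img, 8, thresholds=[1.0, 3.0])
print(a[(0, 0)], a[(0, 8)])  # flat block -> network 0, noisy block -> network 2
```

Each index would select a separately trained compressor/de-compressor pair, so simple blocks get a small cheap network and busy blocks a more capable one; the Best-SNR variant replaces the thresholding with a trial compression by every network.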
Abstract: In today's era of plasma and laser cutting, machines using an oxy-acetylene flame remain meritorious due to their simplicity and cost effectiveness. The objective to devise a computer-controlled oxy-fuel profile cutting machine arose from the increasing demand for metal cutting with respect to edge quality, circularity and less formation of redeposited material. The system has an 8-bit microcontroller-based embedded system, which assures a stipulated time response. New Windows-based application software was devised which takes a standard CAD file (.DXF) as input and converts it into the numerical data required by the controller. It uses VB6 as the front end, and MS-ACCESS and AutoCAD as the back end. The system is designed around the AT89C51RD2, a powerful 8-bit ISP microcontroller from Atmel, and is optimized to achieve cost effectiveness while maintaining the required accuracy and reliability for complex shapes. The backbone of the system is a cleverly designed mechanical assembly, which together with the embedded system results in an accuracy of about 10 microns while maintaining perfect linearity in the cut. This results in a substantial increase in productivity. The observed results also indicate reduced interlaminar spacing of pearlite with an increase in the hardness of the edge region.
Abstract: A new conceptual architecture for low-level neural
pattern recognition is presented. The key ideas are that the brain
implements support vector machines and that support vectors are
represented as memory patterns in competitive queuing memories. A
binary classifier is built from two competitive queuing memories
holding positive and negative valence training examples respectively.
The support vector machine classification function is calculated in
synchronized evaluation cycles. The kernel is computed by bisymmetric feed-forward networks fed by sensory input and by competitive queuing memories traversing the complete sequence of support vectors. Temporal summation generates the output classification. It is speculated that the perception apparatus in the brain reuses structures that have evolved to enable fluent execution of prepared action sequences, so that pattern recognition is built on internalized motor programmes.
Abstract: Consumers' motivations for sharing viral advertisements, and the impact of these advertisements on brand perception, will be examined in this study. Three fundamental questions are answered: individuals' motivations for watching and sharing advertisements, the criteria for liking a viral advertisement, and the impact of individuals' attitudes toward a viral advertisement on brand perception, respectively. The study will be carried out via a viral advertisement practiced in Turkey. The data will be collected by an online survey whose sample consists of individuals who experienced the sample advertisement, and will be analyzed using the SPSS statistical package.
Recently, the traditional advertising mindset has been changing, and new advertising approaches with significant impacts on consumers have been discussed. Viral advertising is a modernist advertising mindset that offers brands significant advantages over traditional advertising channels such as television, radio and magazines. Viral advertising, also known as Electronic Word-of-Mouth (eWOM), consists of the free spread of convincing messages sent by brands through interpersonal communication. Compared to traditional advertising, a more provocative thematic approach is adopted.
The foundation of this approach is to create advertisements that consumers consider worth sharing with others. In that light, it can even be said that viral advertising is media engineering.
Content worth sharing makes people volunteer spokespersons for a brand and strengthens the emotional bonds between brand and consumer. Especially in some sectors in countries with limitations on traditional advertising channels, viral advertising creates vital advantages.
Abstract: This work presents a new phonetic transcription system based on a tree of hierarchical pronunciation rules expressed as context-specific grapheme-phoneme correspondences. The tree is automatically inferred from a phonetic dictionary by incrementally analyzing deeper context levels, eventually representing a minimum set of exhaustive rules that pronounce without errors all the words in the training dictionary and that can be applied to out-of-vocabulary words. The proposed approach improves upon existing rule-tree-based techniques in that it makes use of graphemes, rather than letters, as elementary orthographic units. A new linear algorithm for the segmentation of a word into graphemes is introduced to enable out-of-vocabulary grapheme-based phonetic transcription. Exhaustive rule trees provide a canonical representation of the pronunciation rules of a language that can be used not only to pronounce out-of-vocabulary words, but also to analyze and compare the pronunciation rules inferred from different dictionaries. The proposed approach has been implemented in C and tested on Oxford British English and Basic English. Experimental results show that grapheme-based rule trees represent phonetically sound rules and provide better performance than letter-based rule trees.
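The grapheme segmentation step can be sketched with a greedy longest-match scan, which is linear in the word length when the maximum grapheme length is bounded. This is an illustrative assumption, not the paper's algorithm, and the tiny grapheme inventory below is hypothetical.

```python
def segment_graphemes(word, inventory):
    """Greedy longest-match segmentation of a word into graphemes.
    Linear in word length for a bounded maximum grapheme length."""
    max_len = max(map(len, inventory))
    out, i = [], 0
    while i < len(word):
        for l in range(min(max_len, len(word) - i), 0, -1):
            # Prefer the longest known grapheme; fall back to a single letter.
            if word[i:i + l] in inventory or l == 1:
                out.append(word[i:i + l])
                i += l
                break
    return out

# Hypothetical English multi-letter grapheme inventory (illustrative only).
graphemes = {"th", "ou", "gh", "sh", "ch", "ea", "ck"}
print(segment_graphemes("though", graphemes))   # ['th', 'ou', 'gh']
print(segment_graphemes("check", graphemes))    # ['ch', 'e', 'ck']
```

Rules keyed on such grapheme units ("gh" as one context element rather than "g" then "h") are what lets the grapheme-based trees outperform letter-based ones in the reported experiments.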
Abstract: The prevalence of non-organic constipation differs from country to country, and the reliability of the estimated rates is uncertain. Moreover, the clinical relevance of subdividing the heterogeneous functional constipation disorders into pre-defined subgroups is largely unknown. Aim: To estimate the prevalence of constipation in a population-based sample and determine whether clinical subgroups can be identified. An age- and gender-stratified sample population from 5 Italian cities was evaluated using a previously validated questionnaire. Data mining by cluster analysis was used to determine constipation subgroups. Results: 1,500 complete interviews were obtained from 2,083 contacted households (72%). Self-reported constipation correlated poorly with symptom-based constipation, found in 496 subjects (33.1%). Cluster analysis identified four constipation subgroups, which correlated with subgroups identified according to pre-defined symptom criteria. Significant differences in socio-demographics and lifestyle were observed among the subgroups.