Abstract: This paper aims to present a survey of object
recognition/classification methods based on image moments. We
review various types of moments (geometric moments, complex
moments) and moment-based invariants with respect to various
image degradations and distortions (rotation, scaling, affine
transform, image blurring, etc.) which can be used as shape
descriptors for classification. We explain a general theory of how to
construct these invariants and also show a few of them in explicit
form. We review efficient numerical algorithms that can be used
for moment computation and demonstrate practical examples of
using moment invariants in real applications.
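As a brief illustration of the geometric moments surveyed above, the following sketch (our own illustration, not code from the surveyed papers) computes the first two Hu rotation invariants from normalized central moments and checks their invariance under a 90-degree rotation:

```python
import numpy as np

def hu_first_two(img):
    """First two Hu moment invariants of a 2-D intensity image,
    built from normalized central moments eta_pq."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def eta(p, q):
        # central moment mu_pq, normalized for scale invariance
        mu = (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
        return mu / m00 ** ((p + q) / 2 + 1)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

img = np.zeros((32, 32))
img[8:20, 10:26] = 1.0                 # a simple rectangular "object"
print(hu_first_two(img))
print(hu_first_two(np.rot90(img)))     # same values after rotation
```

Rotating the image swaps eta(2,0) and eta(0,2), which leaves both phi1 and phi2 unchanged; this is the sense in which such descriptors are rotation-invariant.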
Abstract: This paper focuses on reducing the power consumption
of wireless sensor networks. Therefore, a communication protocol
named LEACH (Low-Energy Adaptive Clustering Hierarchy) is modified.
We extend LEACH's stochastic cluster-head selection algorithm
by modifying the probability of each node becoming a cluster-head
based on the energy it requires to transmit to the sink. We present
an efficient energy-aware routing algorithm for wireless sensor
networks. Our contribution consists in rotating the selection of cluster-heads
considering first the remoteness of the nodes from the sink and then
the residual energy of the network nodes. This choice allows a better distribution
of the transmission energy in the network. The cluster-head
selection algorithm is completely decentralized. Simulation results
show that energy consumption is significantly reduced compared with
previous clustering-based routing algorithms for sensor networks.
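The stochastic election that the paper modifies can be sketched as follows. The first function is the well-known standard LEACH threshold; the energy- and distance-based weighting in the second is a hypothetical illustration of ours, since the abstract does not give the paper's exact formula:

```python
def leach_threshold(p, r):
    """Standard LEACH election threshold T(n) for round r, given the
    desired cluster-head fraction p (nodes that already served as
    cluster-head in the current epoch use a threshold of 0)."""
    return p / (1 - p * (r % int(1 / p)))

def modified_probability(p, r, e_residual, e_max, d_sink, d_max):
    """Hypothetical weighting in the spirit of the abstract: scale the
    threshold by residual energy and by closeness to the sink (the
    paper's exact formula is not given in the abstract)."""
    return leach_threshold(p, r) * (e_residual / e_max) * (1 - d_sink / (2 * d_max))

t = leach_threshold(0.05, 3)           # round 3, 5% desired cluster-heads
print(t)                               # 0.05 / (1 - 0.15) ≈ 0.0588
print(modified_probability(0.05, 3, e_residual=0.8, e_max=1.0,
                           d_sink=40.0, d_max=100.0))
```

A node with low residual energy or a long haul to the sink thus volunteers less often, which is how the rotation spreads transmission cost over the network.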
Abstract: Case-Based Reasoning (CBR) is a machine
learning approach to problem solving and learning that has attracted
considerable attention over the last few years. In general, CBR is composed of
four main phases: retrieve the most similar case or cases, reuse the
case to solve the problem, revise or adapt the proposed solution, and
retain the learned case in the case base for future
learning. Unfortunately, in many cases, this retain process
causes uncontrolled case-base growth. The problem affects the
competence and performance of CBR systems. This paper proposes a
competence-based maintenance method based on a deletion
policy for CBR. There are three main steps in this method. Step 1:
formulate the problem. Step 2: determine the coverage and reachability sets
based on coverage values. Step 3: reduce the case-base size. The results
obtained show that the proposed method performs better than the
existing methods discussed in the literature.
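The coverage/reachability computation and deletion step described above might be sketched as follows on a toy case base; the adaptability test `can_solve` is a placeholder for the paper's domain-specific criterion, and the greedy policy shown is a generic competence-preserving deletion, not necessarily the paper's exact one:

```python
def coverage_sets(cases, can_solve):
    """Coverage(c) = cases that c can solve; Reachability(c) = cases
    that can solve c. can_solve(a, b) is domain-specific (assumed)."""
    cov = {c: {d for d in cases if can_solve(c, d)} for c in cases}
    reach = {c: {d for d in cases if can_solve(d, c)} for c in cases}
    return cov, reach

def reduce_case_base(cases, can_solve):
    """Deletion policy: try to drop cases with the smallest coverage
    first, keeping every original problem solvable by a retained case."""
    retained = set(cases)
    cov, _ = coverage_sets(cases, can_solve)
    for c in sorted(cases, key=lambda c: len(cov[c])):
        others = retained - {c}
        if all(any(can_solve(r, d) for r in others) for d in cases):
            retained = others
    return retained

# toy 1-D problems: a case solves another if they are within distance 1
cases = [0, 1, 2, 10, 11]
solves = lambda a, b: abs(a - b) <= 1
kept = reduce_case_base(cases, solves)
print(sorted(kept))  # [1, 11] — two cases cover all five problems
```

The case base shrinks from five cases to two while its competence (the set of problems it can solve) is preserved.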
Abstract: Dilated cardiomyopathy (DCM) is a severe
cardiovascular disorder characterized by progressive systolic
dysfunction due to cardiac chamber dilatation and inefficient
myocardial contractility often leading to chronic heart failure.
Recently, genome-wide association studies (GWASs) on DCM
indicated that the ZBTB17 gene rs10927875 single nucleotide
polymorphism is associated with DCM. The aim of the study was to
identify the distribution of ZBTB17 gene rs10927875 polymorphism
in 50 Slovak patients with DCM and 80 healthy control subjects
using the Custom Taqman®SNP Genotyping assays. Risk factors
detected at baseline in each group included age, sex, body mass
index, smoking status, diabetes and blood pressure. The mean age of
patients with DCM was 52.9±6.3 years; the mean age of individuals
in the control group was 50.3±8.9 years. The distribution of the investigated
genotypes of rs10927875 polymorphism within ZBTB17 gene in the
cohort of Slovak patients with DCM was as follows: CC (38.8%), CT
(55.1%), TT (6.1%), in controls: CC (43.8%), CT (51.2%), TT
(5.0%). The risk allele T was more common among the patients with
dilated cardiomyopathy than in normal controls (33.7% versus
30.6%). The differences in genotype or allele frequencies of ZBTB17
gene rs10927875 polymorphism were not statistically significant
(p=0.6908; p=0.6098). The results of this study suggest that ZBTB17
gene rs10927875 polymorphism may be a risk factor for
susceptibility to DCM in Slovak patients. Studies of larger
cohorts and additional functional investigations are needed to
fully understand the role of this genetic association.
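The reported allele frequencies follow directly from the genotype frequencies: each CT genotype carries one T allele out of two, each TT carries two. A quick check reproduces the reported 33.7% and 30.6%:

```python
def allele_T_frequency(cc, ct, tt):
    """Frequency of allele T from genotype percentages (or counts):
    T alleles contributed = CT * 1 + TT * 2, out of 2 per individual."""
    return (ct + 2 * tt) / (2 * (cc + ct + tt))

patients = allele_T_frequency(38.8, 55.1, 6.1)   # DCM cohort genotypes
controls = allele_T_frequency(43.8, 51.2, 5.0)   # control genotypes
print(patients, controls)  # ≈ 0.337 vs 0.306, matching the abstract
```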
Abstract: This paper analyzes different fine-grained security techniques for relational databases with respect to two variables: data accessibility and inference. Data accessibility measures the amount of data available to users after applying a security technique to a table. Inference is the proportion of information leakage after suppressing a cell containing secret data. A row containing a suppressed secret cell can become a security threat if an intruder generates useful information from the related visible information in the same row. This paper measures the data accessibility and inference associated with row-, cell-, and column-level security techniques. Cell-level security offers the greatest data accessibility, as it suppresses secret data only; on the other hand, it carries a high probability of inference. Row- and column-level security techniques have the least data accessibility and inference. This paper introduces the cell-plus-innocent security technique, which uses the cell-level security method but also suppresses some innocent data, so that an intruder cannot assume a suppressed cell necessarily contains secret data. Four variations of the technique, namely cell plus innocent 1/4, 2/4, 3/4, and 4/4, are introduced to suppress innocent data equal to 1/4, 2/4, 3/4, and 4/4 of the true secret data inside the database. Results show that the new technique offers better control over data accessibility and inference compared with the state-of-the-art security techniques. The paper further discusses combining these techniques, and shows that the cell plus innocent 1/4, 2/4, and 3/4 techniques can be used as a replacement for cell-level security.
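The cell-plus-innocent idea can be sketched as follows; this is a minimal illustration of ours (random choice of innocent cells, `*` as the suppression marker), and the paper's actual selection of innocent cells may differ:

```python
import random

def cell_plus_innocent(table, secret, fraction, seed=0):
    """Suppress every secret cell, then additionally suppress
    round(fraction * |secret|) randomly chosen innocent cells, so a
    suppressed cell is no longer guaranteed to be secret.
    table: dict (row, col) -> value; secret: set of (row, col)."""
    rng = random.Random(seed)
    innocents = [cell for cell in table if cell not in secret]
    extra = set(rng.sample(innocents, round(fraction * len(secret))))
    return {cell: ('*' if cell in secret or cell in extra else v)
            for cell, v in table.items()}

table = {(r, c): f"v{r}{c}" for r in range(4) for c in range(3)}
secret = {(0, 1), (1, 1), (2, 2), (3, 0)}
out = cell_plus_innocent(table, secret, fraction=2/4)  # "cell plus innocent 2/4"
print(sum(v == '*' for v in out.values()))  # 4 secret + 2 innocent = 6
```

With fraction 2/4, two innocent cells are hidden alongside the four secret ones, trading a little data accessibility for lower inference.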
Abstract: Non-uniform current distribution in polymer
electrolyte membrane fuel cells results in local over-heating,
accelerated ageing, and lower power output than expected. This
issue is especially critical when the fuel cell experiences water flooding. In
this work, the performance of a PEM fuel cell is investigated under
cathode flooding conditions. Two-dimensional partially flooded
GDL models based on the conservation laws and electrochemical
relations are proposed to study local current density distributions
along flow fields over a wide range of cell operating conditions.
The model results show that increasing the cathode inlet
humidity increases the average current density, but the
system becomes more sensitive to flooding. The anode inlet
relative humidity shows a similar effect. Operating the cell at
higher temperatures would lead to higher average current densities
and the chance of system being flooded is reduced. In addition,
higher cathode stoichiometries prevent system flooding but the
average current density remains almost constant. The higher anode
stoichiometry leads to higher average current density and higher
sensitivity to cathode flooding.
Abstract: Quantitative methods of economic decision-making, as
the methodological base of so-called operational research,
represent an important set of tools for managing complex economic
systems, both at the microeconomic level and on the macroeconomic
scale. Mathematical models of controlled and controlling processes
allow, by means of artificial experiments, obtaining information
for optimal, or optimum-approaching, managerial decision-making. The
quantitative methods of economic decision-making usually include a
methodology known as structural analysis - an analysis of
interdisciplinary production-consumption relations.
Abstract: Among neural models, Support Vector Machine
(SVM) solutions are attracting increasing attention, mostly because
they eliminate certain crucial questions raised by neural network
construction. The main drawback of the standard SVM is its high
computational complexity; therefore, a new technique, the
Least Squares SVM (LS–SVM), has recently been introduced. In this paper we
present an extended view of Least Squares Support Vector
Regression (LS–SVR), which enables us to develop new
formulations and algorithms for this regression technique. Based on
manipulating the linear equation set - which embodies all information
about the regression in the learning process - some new methods are
introduced to simplify the formulations, speed up the calculations
and/or provide better results.
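The linear equation set at the heart of LS–SVR (kernel matrix plus a ridge term, bordered by a row and column of ones for the bias) can be sketched as follows; this is an illustrative implementation with an assumed RBF kernel, not the paper's extended formulations:

```python
import numpy as np

def lssvr_fit(X, y, gamma=100.0, sigma=1.0):
    """LS-SVR training: one linear solve instead of a QP.
    Unknowns are the bias b and the dual variables alpha."""
    K = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2 * sigma ** 2))
    n = len(X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0                    # sum of alphas = 0
    A[1:, 0] = 1.0                    # bias column
    A[1:, 1:] = K + np.eye(n) / gamma # kernel matrix + regularization
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return lambda x: np.exp(-((x - X) ** 2) / (2 * sigma ** 2)) @ alpha + b

X = np.linspace(0, 3, 30)
y = np.sin(X)
f = lssvr_fit(X, y)
print(abs(f(1.5) - np.sin(1.5)))  # small error inside the training range
```

Because training reduces to solving one dense linear system, the manipulations mentioned in the abstract (reordering, low-rank updates, iterative solvers) act directly on this matrix.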
Abstract: This paper presents a traceability framework for supply risk monitoring, beginning with the identification, analysis, and evaluation of supply chain risk and focusing on the supply operations of health care institutions with oncology services in Bogotá, Colombia. It includes a brief presentation of the state of the art of Supply Chain Risk Management (SCRM) and of traceability systems in logistics operations, and it concludes with the methodology to integrate the SCRM model with the traceability system.
Abstract: Relational databases are often used as a basis for persistent storage of ontologies, both to facilitate rapid operations such as search and retrieval and to exploit the benefits of relational database management systems, such as transaction management, security and integrity control. At the same time, more and more OWL files containing ontologies are appearing. This paper therefore proposes to extract ontologies from OWL files and store them in relational databases. A prerequisite for this storage is the transformation of ontologies into relational databases, which is the purpose of this paper.
Abstract: Recently, Northeast Asia has become one of the three
largest trade areas, covering approximately 30% of the total trade
volume of the world. However, the distribution facilities are saturated
due to the increase in the transportation volume within the area and
with the European countries. In order to accommodate the increase of
the transportation volume, the transportation networking with the
major countries in Northeast Asia and Europe is absolutely necessary.
The Eurasian Logistics Network will develop into an international
passenger transportation network covering the Northeast Asian region
and an international freight transportation network connecting across
the Eurasian continent. This paper surveys the changes and trends of the
distribution network in the Eurasian region according to the political,
economic and environmental changes of the region, analyses the
distribution network according to the changes in the transportation
policies of the related countries, and provides the direction of the
development of composite transportation on the basis of the present
conditions of transportation means. The transportation means optimal
for an efficient transportation system are suggested to be train
ferries, sea & rail, or sea & rail & sea. It is suggested to develop
diversified composite transportation means and routes within the
framework of an international cooperation system.
Abstract: This paper reports a new and accurate method for the load-flow solution of radial distribution networks with minimum data preparation. The node and branch numbering need not be sequential, unlike in other available methods. The proposed method does not need sending-node, receiving-node and branch numbers if these are sequential. The proposed method uses a simple equation to compute the voltage magnitude and has the capability to handle composite load modelling. The proposed method uses the sets of nodes of the feeder, lateral(s) and sub-lateral(s). The effectiveness of the proposed method is compared with other methods using two examples. Detailed load-flow results for different kinds of load modelling are also presented.
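A common way to solve radial-feeder load flow without building a full admittance matrix is the backward/forward sweep; the sketch below illustrates the idea on a single per-unit feeder (a generic textbook scheme, not necessarily the paper's exact equations):

```python
import numpy as np

def radial_load_flow(z, s_load, v_source=1.0, iters=30):
    """Backward/forward sweep on one radial feeder (per-unit).
    z[k]: impedance of the branch feeding node k+1; s_load[k]: complex
    power drawn at node k+1. Returns node voltage magnitudes."""
    n = len(z)
    v = np.full(n + 1, v_source, dtype=complex)
    for _ in range(iters):
        i_node = np.conj(s_load / v[1:])           # node load currents
        i_branch = np.cumsum(i_node[::-1])[::-1]   # backward sweep: suffix sums
        for k in range(n):                         # forward sweep: voltage drops
            v[k + 1] = v[k] - z[k] * i_branch[k]
    return np.abs(v)

z = np.array([0.01 + 0.02j] * 3)       # three identical branches
s = np.array([0.10 + 0.05j] * 3)       # identical loads at each node
print(radial_load_flow(z, s))          # voltage magnitudes drop along the feeder
```

Each iteration sums load currents from the feeder end back to the source, then updates voltages from the source outward; a few iterations suffice on well-conditioned feeders.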
Abstract: This work concerns the evolution and maintenance
of an ontological resource in relation to the evolution of the corpus
of texts from which it was built.
The knowledge forming a text corpus, especially in dynamic domains,
is in continuous evolution. When a change in the corpus occurs, the
domain ontology must evolve accordingly. Most methods manage
ontology evolution independently of the corpus from which the ontology is
built; in addition, they treat evolution merely as a process of knowledge
addition, without considering other kinds of knowledge change. We propose a
methodology for managing an evolving ontology from a text corpus
that evolves over time, while preserving the consistency and the
persistence of this ontology.
Our methodology is based on the changes made on the corpus to
reflect the evolution of the considered domain - augmented surgery
in our case. In this context, the results of text mining techniques,
as well as the ARCHONTE method slightly modified, are used to
support the evolution process.
Abstract: Proprietary sensor network systems are typically expensive and rigid, and make it difficult to incorporate technologies from other vendors. When using competing and incompatible technologies, a non-proprietary system is complex to create because it requires significant technical expertise and effort, which can be more expensive than a proprietary product. This paper presents the Sensor Abstraction Layer (SAL), which provides middleware architectures with a consistent and uniform view of heterogeneous sensor networks, regardless of the technologies involved. SAL abstracts and hides the hardware disparities and specificities related to accessing, controlling, probing and piloting heterogeneous sensors. SAL is a single software library containing a stable hardware-independent interface with consistent access and control functions to remotely manage the network. The end-user has near-real-time access to the collected data via the network, which results in a cost-effective, flexible and simplified system suitable for novice users. SAL has been used to successfully implement several low-cost sensor network systems.
Abstract: Applications such as telecommunications, hands-free communication and recording require at least one microphone, and the microphone signal is usually corrupted by noise and echo. An important application is speech enhancement, whose aim is to remove the noise and echo picked up by the microphone alongside the desired speech. Accordingly, the microphone signal has to be cleaned using digital signal processing (DSP) tools before it is played out, transmitted, or stored. Engineers have so far tried different approaches to improving speech quality by recovering the desired speech signal from noisy observations, especially in mobile communication. In this paper we reconstruct a speech signal observed in additive background noise, using the Kalman filter technique to estimate the parameters of an autoregressive (AR) process in a state-space model; the output speech signal is obtained in MATLAB. Accurate Kalman-filter estimation enhances the speech and reduces the noise; we then compare and discuss the results for actual and estimated values, which produce the reconstructed signals.
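A minimal sketch of Kalman filtering for an AR signal in additive white noise, in Python rather than MATLAB: the AR coefficients are assumed known here (the paper estimates them), and the signal is synthetic rather than recorded speech:

```python
import numpy as np

def kalman_ar_denoise(y, a, q, r):
    """Kalman filter for an AR(p) signal in additive white noise.
    a: AR coefficients, q: process-noise variance, r: measurement-noise
    variance. State = last p clean samples (companion form)."""
    p = len(a)
    F = np.zeros((p, p)); F[0] = a; F[1:, :-1] = np.eye(p - 1)
    x, P = np.zeros(p), np.eye(p)
    est = []
    for yt in y:
        x = F @ x                        # predict
        P = F @ P @ F.T; P[0, 0] += q
        s = P[0, 0] + r                  # innovation variance (H = [1 0 ... 0])
        k = P[:, 0] / s                  # Kalman gain
        x = x + k * (yt - x[0])          # update with the noisy sample
        P = P - np.outer(k, P[0])
        est.append(x[0])
    return np.array(est)

rng = np.random.default_rng(0)
a, q, r = np.array([1.5, -0.7]), 0.1, 1.0
clean = np.zeros(500)
for t in range(2, 500):                  # synthetic AR(2) "speech-like" signal
    clean[t] = a @ clean[t-2:t][::-1] + rng.normal(0, np.sqrt(q))
noisy = clean + rng.normal(0, np.sqrt(r), 500)
est = kalman_ar_denoise(noisy, a, q, r)
mse_noisy = np.mean((noisy - clean) ** 2)
mse_filt = np.mean((est - clean) ** 2)
print(mse_noisy, mse_filt)               # filtering reduces the error
```

The filtered estimate has a visibly lower mean-squared error than the raw noisy observation, which is the enhancement effect the abstract describes.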
Abstract: This paper presents a numerical approach for the static
and dynamic analysis of hydrodynamic radial journal bearings. In the
first part, the effect of shaft and housing deformability on pressure
distribution within the oil film is investigated. An iterative algorithm that
couples the Reynolds equation with a plane finite-element (FE)
structural model is solved. Viscosity-to-pressure dependency (Vogel-
Barus equation) is also included. The deformed lubrication gap and
the overall stress state are obtained. Numerical results are presented
with reference to a typical journal bearing configuration at two
different inlet oil temperatures. The obtained results show the great
influence of the structural deformation of the bearing components on the oil
pressure distribution, compared with results for ideally rigid
components. In the second part, a numerical approach based on a
perturbation method is used to compute stiffness and damping
matrices, which characterize the journal bearing dynamic behavior.
Abstract: Bacterial cellulose, a biopolysaccharide, is produced by the bacterium Gluconacetobacter xylinus. Static batch fermentation for bacterial cellulose production was studied in sucrose and date syrup solutions (Bx. 10%) at 28 °C using G. xylinus (PTCC 1734). Results showed that the maximum yields of bacterial cellulose (BC) were 4.35 and 1.69 g/100 ml for the date syrup and sucrose media, respectively, after a 336-hour fermentation period. Comparison of the FTIR spectra of cellulose and BC showed good agreement, confirming that the component produced by G. xylinus was cellulose. Determination of the area under the X-ray diffractometry patterns demonstrated that the crystallinity of cellulose (83.61%) was higher than that of BC (60.73%). Scanning electron microscopy imaging of BC and cellulose was carried out at two magnifications, 1K and 6K. Results showed that the diameter ratio of BC fibers to cellulose fibers was approximately 1/30, indicating that BC fibers are much finer than cellulose fibers.
Abstract: In this paper three different approaches for person
verification and identification, i.e. by means of fingerprints, face and
voice recognition, are studied. Face recognition uses parts-based
representation methods and a manifold learning approach. The
assessment criterion is recognition accuracy. The techniques under
investigation are: a) Local Non-negative Matrix Factorization
(LNMF); b) Independent Components Analysis (ICA); c) NMF with
sparse constraints (NMFsc); d) Locality Preserving Projections
(Laplacianfaces). Fingerprint detection was approached by classical
minutiae (small graphical patterns) matching through image
segmentation by using a structural approach and a neural network as
decision block. For voice/speaker recognition, mel-cepstral
and delta-delta mel-cepstral analysis were used as the main methods, in
order to construct a supervised speaker-dependent voice recognition
system. The final decision (e.g. “accept-reject" for a verification
task) is taken by using a majority voting technique applied to the
three biometrics. The preliminary results, obtained for medium
databases of fingerprints, faces and voice recordings, indicate the
feasibility of our study and an overall recognition precision (about
92%) permitting the utilization of our system for a future complex
biometric card.
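The final accept/reject fusion described above can be sketched as a simple majority vote over the three modalities:

```python
def majority_vote(decisions):
    """Accept iff at least two of the three biometric modalities accept.
    decisions: dict mapping modality name -> bool (accept/reject)."""
    return sum(decisions.values()) >= 2

print(majority_vote({'fingerprint': True, 'face': True, 'voice': False}))   # True
print(majority_vote({'fingerprint': False, 'face': True, 'voice': False}))  # False
```

A single failing modality (e.g. a noisy voice sample) therefore cannot overturn two agreeing ones, which is what makes the fused decision more robust than any single biometric.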
Abstract: In this paper, a frequency-variation-based method is
proposed for transistor parameter estimation in a common-emitter
transistor amplifier circuit. We design an algorithm to estimate
the transistor parameters, based on noisy measurements of the output
voltage when the input voltage is a sine wave of variable frequency
and constant amplitude. The common emitter amplifier circuit has
been modelled using the transistor Ebers-Moll equations and the
perturbation technique has been used for separating the linear and
nonlinear parts of the Ebers-Moll equations. This model of the amplifier
has been used to determine the amplitude of the output sinusoid as
a function of the frequency and the parameter vector. Then, applying
the proposed method to the frequency components, the transistor
parameters have been estimated. As compared to the conventional
time-domain least squares method, the proposed method requires
much less data storage and it results in more accurate parameter
estimation, as it exploits the information in the time and frequency
domains simultaneously. The proposed method can be utilized for
parameter estimation of an analog device in its operating range of
frequencies, as it uses data collected from output signals at different
frequencies.
Abstract: Markov games are a generalization of the Markov
decision process to a multi-agent setting. The two-player zero-sum
Markov game framework offers an effective platform for designing
robust controllers. This paper presents two novel controller design
algorithms that use ideas from game-theory literature to produce
reliable controllers that are able to maintain performance in the presence
of noise and parameter variations. A more widely used approach for
controller design is H∞ optimal control, which suffers from high
computational demand and, at times, may be infeasible. Our approach
generates an optimal control policy for the agent (controller) via a
simple Linear Program enabling the controller to learn about the
unknown environment. The controller is facing an unknown
environment, and in our formulation this environment corresponds to
the behavior rules of the noise modeled as the opponent. Proposed
controller architectures attempt to improve controller reliability by a
gradual mixing of algorithmic approaches drawn from the game
theory literature and the Minimax-Q Markov game solution
approach, in a reinforcement-learning framework. We test the
proposed algorithms on a simulated Inverted Pendulum Swing-up
task and compare their performance against standard Q-learning.
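The "simple Linear Program" mentioned above is, in Minimax-Q, solved at each state to extract the agent's maximin mixed policy against the opponent (here the noise). A generic sketch of that per-state LP, solved with scipy on the matching-pennies payoff matrix:

```python
import numpy as np
from scipy.optimize import linprog

def minimax_policy(A):
    """Maximin mixed policy for a zero-sum matrix game A (rows: agent
    actions, columns: opponent actions): maximize v subject to
    p^T A[:, j] >= v for every opponent action j, sum(p) = 1, p >= 0."""
    m, n = A.shape
    c = np.zeros(m + 1); c[-1] = -1.0            # maximize v <=> minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])    # v - p^T A[:, j] <= 0
    A_eq = np.ones((1, m + 1)); A_eq[0, -1] = 0  # probabilities sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[:m], res.x[-1]

p, v = minimax_policy(np.array([[1.0, -1.0], [-1.0, 1.0]]))  # matching pennies
print(p, v)  # ≈ [0.5 0.5], game value ≈ 0
```

Mixing uniformly guarantees the agent a payoff of at least the game value no matter what the opponent (noise) does; in Minimax-Q this value replaces the max over actions in the standard Q-learning update.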