Abstract: The P-Bigram method is a string comparison method
based on an internal two-character similarity measure. The edit
distance between two strings is the minimal number of elementary
editing operations required to transform one string into the other;
the elementary editing operations are deletion, insertion, and
substitution of two characters. In this paper, we apply the P-Bigram
method to solve the similarity problem for DNA sequences. The method
provides an efficient algorithm that locates all minimum-cost editing
operations in a string. We implemented the algorithm and found that
our program computes smaller distances than single-character
comparison. We develop the P-Bigram edit distance, relate edit
distance to similarity, and give an implementation using dynamic
programming. The performance of the proposed approach is evaluated
using the number of edits and the percentage similarity measures.
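Since the abstract does not give pseudocode, here is a minimal sketch of the two ingredients it names: the standard dynamic-programming edit distance and a character-bigram similarity (Dice coefficient). The exact P-Bigram weighting is not specified in the abstract, so the bigram measure below is an illustrative stand-in, not the authors' definition.

```python
def bigrams(s):
    """All overlapping character bigrams of s."""
    return [s[i:i + 2] for i in range(len(s) - 1)]

def bigram_similarity(a, b):
    """Dice coefficient over character bigrams (a common 2-gram measure;
    the exact P-Bigram weighting is an assumption here)."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba and not bb:
        return 1.0
    common, pool = 0, list(bb)
    for g in ba:
        if g in pool:
            pool.remove(g)
            common += 1
    return 2.0 * common / (len(ba) + len(bb))

def edit_distance(a, b):
    """Standard DP edit distance (insertion, deletion, substitution, cost 1)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
    return d[m][n]

print(edit_distance("ACGT", "ACCT"))                     # 1
print(round(bigram_similarity("ACGT", "ACCT"), 2))       # 0.33
```

The DP table makes the O(mn) cost of the comparison explicit, which is the basis for the percentage-similarity evaluation mentioned above.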
Abstract: Recent years have seen a growing trend towards the
integration of multiple information sources to support large-scale
prediction of protein-protein interaction (PPI) networks in model
organisms. Despite advances in computational approaches, the
combination of multiple “omic” datasets representing the same type
of data, e.g. different gene expression datasets, has not been
rigorously studied. Furthermore, there is a need to further investigate
the inference capability of powerful approaches, such as
fully-connected Bayesian networks, in the context of the prediction of PPI
networks. This paper addresses these limitations by proposing a
Bayesian approach to integrate multiple datasets, some of which
encode the same type of “omic” data to support the identification of
PPI networks. The case study reported involved the combination of
three gene expression datasets relevant to human heart failure (HF).
In comparison with two traditional methods, Naive Bayesian and
maximum likelihood ratio approaches, the proposed technique can
accurately identify known PPI and can be applied to infer potentially
novel interactions.
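The naive Bayesian baseline mentioned above combines independent evidence sources multiplicatively; a minimal sketch (with made-up likelihood ratios, not the HF study's values) is:

```python
import math

def naive_bayes_log_odds(prior_odds, likelihood_ratios):
    """Combine independent evidence sources into posterior log-odds.

    posterior odds = prior odds * prod(likelihood ratios) -- the naive
    Bayesian combination used as a baseline; a fully-connected Bayesian
    network additionally models dependencies between the sources.
    """
    return math.log(prior_odds) + sum(math.log(r) for r in likelihood_ratios)

# Hypothetical likelihood ratios from three gene expression datasets
# (illustrative numbers only).
log_odds = naive_bayes_log_odds(0.1, [2.0, 5.0, 1.5])
print(round(math.exp(log_odds), 2))  # 1.5
```

The independence assumption is exactly what breaks down when several datasets encode the same type of "omic" data, which motivates the paper's approach.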
Abstract: Multidelay linear control systems described by
difference-differential equations are often studied in modern control
theory. In this paper, algebraic criteria for delay-independent
stabilization and a theorem of delay-independent stabilization for
linear systems with multiple time-delays are established by using the
Lyapunov functional and the Riccati algebraic matrix equation from
matrix theory. An illustrative example and the simulation results show
that the approach to linear systems with multiple time-delays is
effective.
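For readers unfamiliar with the construction, delay-independent criteria of this kind usually start from a Lyapunov-Krasovskii functional of the generic form below; the specific matrices used in the paper may differ, so this is only the standard template:

```latex
V(x_t) \;=\; x(t)^{\top} P\, x(t)
\;+\; \sum_{i=1}^{m} \int_{t-\tau_i}^{t} x(s)^{\top} Q_i\, x(s)\,\mathrm{d}s,
\qquad P = P^{\top} > 0,\quad Q_i = Q_i^{\top} > 0,
```

for the system $\dot{x}(t) = A\,x(t) + \sum_{i=1}^{m} A_i\, x(t-\tau_i)$. Requiring $\dot{V} < 0$ along trajectories for all delay values leads to algebraic conditions of Riccati type, independent of the $\tau_i$.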
Abstract: Most Decision Support Systems (DSS) constructed for waste
management (WM) are not widely marketed and lack practical
application. This is due to the number of variables and the
complexity of the mathematical models, which include the
assumptions and constraints required in decision making. The
approach taken by many researchers in DSS modelling is to isolate a
few key factors that have a significant influence on the DSS. This
segmented approach does not provide a thorough understanding of
the complex relationships among the many elements involved. The
various elements in constructing the DSS must be integrated and
optimized in order to produce a viable model that is marketable and
has practical application. The DSS model used to assist decision
makers should be integrated with GIS, able to give robust predictions
despite the inherent uncertainties of waste generation and the plethora
of waste characteristics, and able to give optimal allocation of the
waste stream to recycling, incineration, landfill and composting.
Abstract: In this paper we propose a computational model for the representation and processing of morpho-phonological phenomena in a natural language, such as Modern Greek. We aim at a unified treatment of inflection, compounding, and word-internal phonological changes, in a model that is used for both analysis and generation. After discussing certain difficulties caused by well-known finite-state approaches, such as Koskenniemi's two-level model [7], when applied to a computational treatment of compounding, we argue that a morphology-based model provides a more adequate account of word-internal phenomena. Contrary to the finite-state approaches, which cannot handle hierarchical word constituency in a satisfactory way, we propose a unification-based word grammar as the nucleus of our strategy, which takes into consideration word representations based on affixation and [stem stem] or [stem word] compounds. In our formalism, feature-passing operations are formulated with the use of the unification device, and phonological rules modeling the correspondence between lexical and surface forms apply at morpheme boundaries. In the paper, examples from Modern Greek illustrate our approach. Morpheme structures, stress, and morphologically conditioned phoneme changes are analyzed and generated in a principled way.
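The unification device at the core of such a word grammar can be illustrated with a toy flat-feature version; real word grammars use typed, nested structures, and the features below are made up for illustration, not actual Greek morphology data.

```python
def unify(f1, f2):
    """Naive unification of flat feature structures (toy illustration).

    Returns the merged structure, or None when two features clash.
    This only shows the feature-passing idea, not a full word grammar.
    """
    merged = dict(f1)
    for feat, val in f2.items():
        if feat in merged and merged[feat] != val:
            return None          # feature clash: unification fails
        merged[feat] = val
    return merged

# A hypothetical stem and suffix agreeing on gender, with the suffix
# contributing number (illustrative features only).
stem = {"cat": "noun", "gender": "fem"}
suffix = {"gender": "fem", "num": "sg"}
print(unify(stem, suffix))              # {'cat': 'noun', 'gender': 'fem', 'num': 'sg'}
print(unify(stem, {"gender": "masc"}))  # None
```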
Abstract: Grid computing provides an effective infrastructure for massive computation among a flexible and dynamic collection of individual systems for resource discovery. The major challenge for grid computing is to prevent breaches and to secure the data from trespassers. To overcome such conflicts, a semantic approach can be designed that filters the access requests of peers by checking the resource description specifying the data and the metadata as factual statements. Between every pair of nodes in the grid, a semantic firewall will be present as a middleware. The requesting peer will be required to present an application specifying their needs to the firewall, and accordingly the system will grant or deny the application request.
Abstract: Identifying the nature of protein-nanoparticle
interactions and favored binding sites is an important issue in
functional characterization of biomolecules and their physiological
responses. Herein, interaction of silver nanoparticles with lysozyme
as a model protein has been monitored via fluorescence spectroscopy.
Formation of complex between the biomolecule and silver
nanoparticles (AgNPs) induced a steady state reduction in the
fluorescence intensity of protein at different concentrations of
nanoparticles. Tryptophan fluorescence quenching spectra suggested
that silver nanoparticles act as a foreign quencher, approaching the
protein via this residue. Analysis of the Stern-Volmer plot showed
quenching constant of 3.73 μM⁻¹. Moreover, a single binding site in
lysozyme is suggested to play a role during interaction with AgNPs,
having low affinity of binding compared to gold nanoparticles.
Unfolding studies of lysozyme showed that the lysozyme-AgNP
complex did not undergo structural perturbation compared to the
bare protein. Results of this effort will pave the way for utilization of
sensitive spectroscopic techniques for rational design of
nanobiomaterials in biomedical applications.
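The Stern-Volmer analysis mentioned above fits F0/F = 1 + Ksv[Q]; a minimal least-squares sketch (with synthetic numbers consistent with the reported constant, not the paper's measured data) is:

```python
def stern_volmer_ksv(concs, intensities, f0):
    """Fit F0/F = 1 + Ksv*[Q] by least squares with the intercept fixed at 1."""
    ys = [f0 / f - 1.0 for f in intensities]          # F0/F - 1 = Ksv*[Q]
    num = sum(c * y for c, y in zip(concs, ys))
    den = sum(c * c for c in concs)
    return num / den

# Synthetic quencher concentrations (uM) and intensities generated from
# Ksv = 3.73 uM^-1 -- illustrative values, not the measured data.
f0 = 100.0
concs = [0.5, 1.0, 1.5, 2.0]
intensities = [f0 / (1.0 + 3.73 * c) for c in concs]
print(round(stern_volmer_ksv(concs, intensities, f0), 2))  # 3.73
```

Fixing the intercept at 1 reflects the Stern-Volmer model itself; a deviating intercept in real data usually signals static quenching or instrument offset.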
Abstract: Many existing studies use Markov decision
processes (MDPs) to model optimal route choice in
stochastic, time-varying networks. However, transforming
large amounts of variable traffic data into optimal route
decisions is a computational challenge when employing MDPs in
real transportation networks. In this paper we model finite-
horizon MDPs using directed hypergraphs. It is shown that the
problem of route choice in stochastic, time-varying networks
can be formulated as a minimum-cost hyperpath problem, which
can be solved in linear time. We finally demonstrate the
significant computational advantages of the introduced
methods.
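The paper's contribution is the hyperpath formulation; as background, the plain finite-horizon MDP solution it improves upon is backward induction, sketched here on a toy route-choice network (the network and costs are made up for illustration):

```python
def backward_induction(horizon, states, actions, cost, trans, terminal):
    """Finite-horizon MDP: minimal expected cost via backward induction.

    trans(s, a) returns a list of (next_state, probability) pairs.
    Returns the time-0 value function as a dict.
    """
    V = dict(terminal)
    for _ in range(horizon):
        V = {s: min(cost(s, a) + sum(p * V[s2] for s2, p in trans(s, a))
                    for a in actions(s))
             for s in states}
    return V

# Toy network: from A a risky road reaches destination D w.p. 0.5,
# otherwise lands at B, from which a detour surely reaches D.
roads = {
    ("A", "risky"):  (1.0, [("D", 0.5), ("B", 0.5)]),
    ("B", "detour"): (2.0, [("D", 1.0)]),
    ("D", "stay"):   (0.0, [("D", 1.0)]),
}
states = ["A", "B", "D"]
actions = lambda s: [a for (s2, a) in roads if s2 == s]
cost = lambda s, a: roads[(s, a)][0]
trans = lambda s, a: roads[(s, a)][1]
terminal = {"A": 100.0, "B": 100.0, "D": 0.0}  # penalty if not yet at D
V0 = backward_induction(2, states, actions, cost, trans, terminal)
print(V0["A"])  # 2.0
```

Each sweep touches every (state, action, successor) triple; the hyperpath view reorganizes exactly this computation so it runs in linear time in the hypergraph size.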
Abstract: Nowadays, the Gene Ontology is used widely by many researchers for biological data mining and information retrieval, integration of biological databases, finding genes, and incorporating knowledge in the Gene Ontology for gene clustering. However, the increase in the size of the Gene Ontology has caused problems in maintaining and processing it. One way to improve its accessibility is to cluster it into fragmented groups. Clustering the Gene Ontology is a difficult combinatorial problem and can be modeled as a graph partitioning problem. Additionally, deciding the number k of clusters to use is not easily perceived and is itself a hard algorithmic problem. Therefore, an approach for the automatic clustering of the Gene Ontology is proposed that incorporates a cohesion-and-coupling metric into a hybrid algorithm consisting of a genetic algorithm and a split-and-merge algorithm. Experimental results and an example of a modularized Gene Ontology in RDF/XML format are given to illustrate the effectiveness of the algorithm.
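A cohesion-and-coupling metric for a graph partition can be sketched in its simplest form as intra- versus inter-cluster edge counts; the paper's exact metric is not specified in the abstract, so the version below is generic and illustrative:

```python
def cohesion_coupling(edges, assignment):
    """Count intra-cluster (cohesion) vs inter-cluster (coupling) edges.

    A generic cohesion-and-coupling measure for a graph partition;
    the paper's exact weighting is an assumption not reproduced here.
    """
    intra = sum(1 for u, v in edges if assignment[u] == assignment[v])
    return intra, len(edges) - intra

# Toy term graph: two dense groups joined by a single bridge edge.
edges = [("a", "b"), ("b", "c"), ("a", "c"),      # group 1
         ("d", "e"), ("e", "f"), ("d", "f"),      # group 2
         ("c", "d")]                              # bridge
assignment = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}
print(cohesion_coupling(edges, assignment))  # (6, 1)
```

A hybrid search (genetic plus split-and-merge, as in the paper) would score candidate partitions with such a metric, letting the number of clusters k emerge automatically.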
Abstract: In its attempt to offer new ways into autonomy for a
large population of disabled people, assistive technology has largely
been inspired by robotics engineering. Recent human-like robots
carry new hopes that it seems to us necessary to analyze by means of
a specific theory of anthropomorphism. We propose to distinguish the
functional anthropomorphism of current wheelchairs from a structural
anthropomorphism based on mimicking human physiological systems.
While functional anthropomorphism offers the main advantage of
eliminating the issue of physiological-system interdependence, the
tight link between robots for disabled people and their human-built
environment argues for privileging the structural anthropomorphic
approach in the future. In this framework, we highlight a general
interdependence principle: any partial or local structural
anthropomorphism generates new anthropomorphic needs due to the
interdependency of physiological systems, whose effects can be
evaluated by means of specific anthropomorphic criteria derived from
a set-theoretic approach to physiological systems.
Abstract: In this paper, we present a new algorithm for clustering data in large datasets using image processing approaches. First the dataset is mapped onto a binary image plane. The synthesized image is then processed using efficient image processing techniques to cluster the data in the dataset. Hence, the algorithm avoids an exhaustive search to identify clusters: it considers only a small subset of the data that contains the critical boundary information sufficient to identify the contained clusters. Compared to available data clustering techniques, the proposed algorithm produces results of similar quality and outperforms them in execution time and storage requirements.
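To illustrate the idea (and not the authors' exact algorithm): mapping points onto a binary grid and labelling connected cells already yields a density-style clustering without exhaustive pairwise comparison.

```python
def cluster_by_image(points, cell=1.0):
    """Map 2D points to binary grid cells, then label 4-connected components.

    A toy version of the image-plane clustering idea: the grid is the
    'binary image', and connected-component labelling groups the points.
    """
    grid = {(int(x // cell), int(y // cell)) for x, y in points}
    labels, next_label = {}, 0
    for start in grid:
        if start in labels:
            continue
        stack = [start]
        labels[start] = next_label
        while stack:                       # flood fill one component
            cx, cy = stack.pop()
            for n in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if n in grid and n not in labels:
                    labels[n] = next_label
                    stack.append(n)
        next_label += 1
    return [labels[(int(x // cell), int(y // cell))] for x, y in points]

labels = cluster_by_image([(0.0, 0.0), (0.5, 0.5), (1.2, 0.3), (10.0, 10.0)])
print(len(set(labels)))  # 2
```

Note that the work per point is constant once the grid is built, which mirrors the execution-time advantage claimed above.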
Abstract: The one-class support vector machine “support vector
data description” (SVDD) is an ideal approach for anomaly or outlier
detection. However, for the applicability of SVDD in real-world
applications, ease of use is crucial. The results of SVDD are
largely determined by the choice of the regularisation parameter C
and the kernel parameter of the widely used RBF kernel. While for
two-class SVMs the parameters can be tuned using cross-validation
based on the confusion matrix, for a one-class SVM this is not
possible, because only true positives and false negatives can occur
during training. This paper proposes an approach to find the optimal
set of parameters for SVDD solely based on a training set from
one class and without any user parameterisation. Results on artificial
and real data sets are presented, underpinning the usefulness of the
approach.
Abstract: Due to today's fierce competition, companies have to
be proactive creators of the future by effectively developing
innovations. Especially radical innovations allow high profit margins
– but they also entail high risks. One possibility to realize radical
innovations and reduce the risk of failure is cross-industry innovation
(CII). CII brings together problems and solution ideas from different
industries. However, there is a lack of systematic ways towards CII.
Bridging this gap, the present paper provides a systematic approach
towards planned CII. Starting with the analysis of potentials, the
definition of promising search strategies is crucial. Subsequently,
identified solution ideas need to be assessed. For the most promising
ones, the adaption process has to be systematically planned –
regarding the risk affinity of a company. The introduced method is
explained on a project from the furniture industry.
Abstract: A novel PDE solver based on the multidimensional wave
digital filtering (MDWDF) technique is presented for a 2D seismic
wave system. In essence, the continuous physical
system served by a linear Kirchhoff circuit is transformed to an
equivalent discrete dynamic system implemented by an MDWDF circuit.
This amounts to numerically approximating the differential equations
that describe the elements of an MD passive electronic circuit by
grid-based difference equations
implemented by the so-called state quantities within the passive
MDWDF circuit. So the digital model can track the wave field on a
dense 3D grid of points. Details about how to transform the continuous
system into a desired discrete passive system are addressed. In
addition, initial and boundary conditions are properly embedded into
the MDWDF circuit in terms of state quantities. Graphic results have
clearly demonstrated some physical effects of seismic wave (P-wave
and S-wave) propagation, including radiation, reflection, and
refraction from and across the hard boundaries. Comparison between
the MDWDF technique and the finite difference time domain (FDTD)
approach is also made in terms of the computational efficiency.
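The FDTD reference scheme used in the comparison is easy to sketch; the following 1D scalar-wave leapfrog update with hard (reflecting) boundaries is only a toy stand-in for the 2D seismic setting, not the paper's MDWDF circuit.

```python
def fdtd_wave_1d(n, steps, courant=0.5, source=(None, 0.0)):
    """Explicit leapfrog update for u_tt = c^2 u_xx on n grid points.

    Endpoints are pinned to 0 (hard boundaries), producing the kind of
    reflections discussed above. Stable for courant <= 1.
    """
    r2 = courant ** 2
    curr = [0.0] * n
    if source[0] is not None:
        curr[source[0]] = source[1]       # initial displacement pulse
    prev = curr[:]                        # zero initial velocity
    for _ in range(steps):
        nxt = [0.0] * n
        for i in range(1, n - 1):
            nxt[i] = (2.0 * curr[i] - prev[i]
                      + r2 * (curr[i + 1] - 2.0 * curr[i] + curr[i - 1]))
        prev, curr = curr, nxt
    return curr

# A centered pulse splits into two symmetric travelling waves.
field = fdtd_wave_1d(11, steps=4, source=(5, 1.0))
print(all(abs(field[i] - field[10 - i]) < 1e-12 for i in range(11)))  # True
```

The MDWDF approach replaces this direct difference update with state quantities of a passive circuit, which is what gives it its favourable numerical (passivity/stability) properties.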
Abstract: The performance of adaptive beamforming degrades
substantially in the presence of steering vector mismatches. This
degradation is especially severe in the near-field, for the
3-dimensional source location is more difficult to estimate than the
2-dimensional direction of arrival in far-field cases. As a solution, a
novel approach of near-field robust adaptive beamforming (RABF) is
proposed in this paper. It is a natural extension of the traditional
far-field RABF and belongs to the class of diagonal loading
approaches, with the loading level determined based on worst-case
performance optimization. However, unlike methods that solve for
the optimal loading iteratively, we suggest a simple
closed-form solution after some approximations, and consequently,
the optimal weight vector can be expressed in a closed form. Besides
simplicity and low computational cost, the proposed approach reveals
how different factors affect the optimal loading as well as the weight
vector. Its excellent performance in the near-field is confirmed via a
number of numerical examples.
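For context, the diagonally loaded weight vector itself has the well-known closed form w = (R + λI)⁻¹a / (aᴴ(R + λI)⁻¹a); the paper's contribution is the closed-form choice of λ, which is not reproduced here. A small self-contained sketch with an arbitrary loading level:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (complex-valued)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0j] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def loaded_mvdr_weights(R, a, loading):
    """Diagonally loaded MVDR: w = (R + lam*I)^-1 a / (a^H (R + lam*I)^-1 a)."""
    n = len(R)
    Rl = [[R[i][j] + (loading if i == j else 0.0) for j in range(n)]
          for i in range(n)]
    u = solve(Rl, a)
    denom = sum(ai.conjugate() * ui for ai, ui in zip(a, u))
    return [ui / denom for ui in u]

# Toy 2-sensor covariance and steering vector; the loading level is
# chosen arbitrarily (the paper derives it from worst-case optimization).
R = [[2.0 + 0j, 0.5 + 0j], [0.5 + 0j, 2.0 + 0j]]
a = [1.0 + 0j, 0.0 + 1.0j]
w = loaded_mvdr_weights(R, a, 0.1)
# Distortionless constraint w^H a = 1 holds:
print(abs(sum(wi.conjugate() * ai for wi, ai in zip(w, a)) - 1) < 1e-9)  # True
```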
Abstract: Data envelopment analysis (DEA) has gained great popularity in environmental performance measurement because it can provide a synthetic standardized environmental performance index when pollutants are suitably incorporated into the traditional DEA framework. Since some of the environmental performance indicators cannot be controlled by company managers, it is necessary to develop the model so that it can be applied when discretionary and/or non-discretionary factors are involved. In this paper, we present a semi-radial DEA approach to measuring environmental performance that accounts for non-discretionary factors. The model has then been applied to a real case.
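The standard way non-discretionary inputs enter a radial DEA model is the Banker-Morey formulation below, where the efficiency contraction θ applies only to inputs managers can control; the paper's semi-radial model with pollutant treatment may differ in detail, so this is only the generic template:

```latex
\begin{aligned}
\min_{\theta,\lambda}\quad & \theta \\
\text{s.t.}\quad & \sum_{j=1}^{n} \lambda_j x_{ij} \le \theta\, x_{io}, && i \in D \ \ \text{(discretionary inputs)}\\
& \sum_{j=1}^{n} \lambda_j x_{ij} \le x_{io}, && i \in ND \ \ \text{(non-discretionary inputs)}\\
& \sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{ro}, && r = 1,\dots,s,\\
& \lambda_j \ge 0, && j = 1,\dots,n .
\end{aligned}
```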
Abstract: In the automotive industry test drives are being conducted
during the development of new vehicle models or as a part of
quality assurance of series-production vehicles. The communication
on the in-vehicle network, data from external sensors, or internal
data from the electronic control units is recorded by automotive
data loggers during the test drives. The recordings are used for fault
analysis. Since the resulting data volume is tremendous, manually
analysing each recording in great detail is not feasible.
This paper proposes to use machine learning to support domain
experts by sparing them from contemplating irrelevant data and
instead pointing them to the relevant parts of the recordings. The
underlying idea is to learn the normal behaviour from available
recordings, i.e. a training set, and then to autonomously detect
unexpected deviations and report them as anomalies.
The one-class support vector machine “support vector data description”
is utilised to calculate distances of feature vectors. SVDDSUBSEQ
is proposed as a novel approach that classifies subsequences
in multivariate time series data. The approach makes it possible to
detect unexpected faults without modelling effort, as is shown with
experimental results on recordings from test drives.
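The subsequence-windowing idea behind SVDDSUBSEQ can be sketched independently of the SVDD itself; since the SVDD decision function needs a QP solver, the sketch below substitutes nearest-training-window distance as the anomaly score, purely to illustrate the windowing (this is not the authors' method).

```python
def sliding_windows(series, width, step=1):
    """Flatten each multivariate window of the series into a feature vector."""
    return [[v for t in range(i, i + width) for v in series[t]]
            for i in range(0, len(series) - width + 1, step)]

def anomaly_scores(train, test, width):
    """Score each test window by its distance to the nearest training window.

    A stand-in for the SVDD decision function, used here only to show
    how subsequences of a multivariate recording are scored.
    """
    ref = sliding_windows(train, width)

    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

    return [min(dist(w, r) for r in ref) for w in sliding_windows(test, width)]

# 'Normal' training recording is flat; the test recording has one spike.
train = [[0.0]] * 10
test = [[0.0], [0.0], [0.0], [5.0], [0.0], [0.0]]
print(anomaly_scores(train, test, width=2))  # [0.0, 0.0, 5.0, 5.0, 0.0]
```

Windows overlapping the spike receive high scores, which is exactly the pointer to "relevant parts of the recording" that the approach provides to domain experts.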
Abstract: With the advent of inexpensive 32-bit floating-point digital signal processors in the market, many computationally intensive algorithms such as the Kalman filter become feasible to implement in real time. Dynamic simulation of a self-excited DC motor using a second-order state variable model and implementation of a Kalman filter on a floating-point DSP, the TMS320C6713, are presented in this paper with the objective of introducing and implementing such an algorithm for beginners. A fractional-hp DC motor is simulated in both Matlab® and the DSP, and the results are included. A step-by-step approach for the simulation of the DC motor in Matlab® and the "C" routines in CC Studio® is also given. CC Studio® project file details and environment setting requirements are addressed. This tutorial can be used with the 6713 DSK, which is based on a floating-point DSP, and CC Studio in either hardware mode or simulation mode.
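For beginners, the filter's predict/update cycle is language-agnostic; here it is in Python for a generic 2-state model with a scalar measurement. The DC-motor matrices and the C/DSP implementation from the paper are not reproduced: F, Q, h and r below are placeholders.

```python
def kalman_step(x, P, F, Q, h, r, z):
    """One predict/update cycle of a linear Kalman filter.

    2-state model, scalar measurement z = h.x + noise. F, Q, h, r are
    generic placeholders, not the paper's DC-motor parameters.
    """
    # Predict: x' = F x,  P' = F P F^T + Q
    xp = [F[0][0] * x[0] + F[0][1] * x[1],
          F[1][0] * x[0] + F[1][1] * x[1]]
    FP = [[sum(F[i][k] * P[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    Pp = [[sum(FP[i][k] * F[j][k] for k in range(2)) + Q[i][j]
           for j in range(2)] for i in range(2)]
    # Update with the scalar measurement
    y = z - (h[0] * xp[0] + h[1] * xp[1])            # innovation
    Ph = [Pp[0][0] * h[0] + Pp[0][1] * h[1],
          Pp[1][0] * h[0] + Pp[1][1] * h[1]]         # P' h^T
    s = h[0] * Ph[0] + h[1] * Ph[1] + r              # innovation variance
    K = [Ph[0] / s, Ph[1] / s]                       # Kalman gain
    xn = [xp[0] + K[0] * y, xp[1] + K[1] * y]
    Pn = [[Pp[i][j] - K[i] * Ph[j] for j in range(2)] for i in range(2)]
    return xn, Pn

# Track a constant reading of 1.0 starting from x = [0, 0].
x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
F, Q = [[1.0, 0.0], [0.0, 1.0]], [[0.01, 0.0], [0.0, 0.01]]
for _ in range(50):
    x, P = kalman_step(x, P, F, Q, [1.0, 0.0], 1.0, 1.0)
print(abs(x[0] - 1.0) < 0.1)  # True
```

The same loop structure maps directly onto a fixed-rate routine on the DSP, with the matrix products unrolled as above.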
Abstract: Every organization is continually subject to new damages and threats, which can result from its operations or the accomplishment of its goals. Methods of securing the workspace and its tools have changed considerably with the increasing application and development of information technology (IT). From this viewpoint, information security management systems evolved to codify, and avoid repeating, the methods learned from experience. In general, the correct response in information security management systems requires correct decision making, which in turn requires the comprehensive effort of managers and everyone involved in each plan or decision. Obviously, not all aspects of a task or decision are defined in every decision-making situation; therefore, the possible or certain risks should be considered when making decisions. This is the subject of risk management, and it can influence the decisions. Investigation of different approaches in the field of risk management demonstrates their progress from quantitative to qualitative methods with a process approach.
Abstract: Knowledge sharing in general and the contextual
access to knowledge in particular, still represent a key challenge in
the knowledge management framework. Researchers on semantic
web and human machine interface study techniques to enhance this
access. For instance, in semantic web, the information retrieval is
based on domain ontology. In human machine interface, keeping
track of user's activity provides some elements of the context that can
guide the access to information. We suggest an approach based on
these two key guidelines, whilst avoiding some of their weaknesses.
The approach permits a representation of both the context and the
design rationale of a project for an efficient access to knowledge. In
fact, the method consists of an information retrieval environment
that, on the one hand, can infer knowledge modeled as a semantic
network and, on the other hand, is based on the context and the
objectives of a specific activity (the design). The environment we
defined can also be used to gather similar project elements in order to
build classifications of tasks, problems, arguments, etc. produced in a
company. These classifications can show the evolution of design
strategies in the company.