Abstract: The literature reports a large number of approaches for
measuring the similarity between protein sequences. Most of these
approaches estimate this similarity using alignment-based techniques
that do not necessarily yield biologically plausible results, for two
reasons.
First, for the case of non-alignable (i.e., not yet definitively aligned
and biologically approved) sequences such as multi-domain, circular
permutation and tandem repeat protein sequences, alignment-based
approaches do not succeed in producing biologically plausible results.
This is due to the nature of the alignment, which is based on the
matching of subsequences in equivalent positions, while non-alignable
proteins often have similar and conserved domains in non-equivalent
positions.
Second, the alignment-based approaches lead to similarity measures
that depend heavily on the parameters set by the user for the alignment
(e.g., gap penalties and substitution matrices). For easily alignable
protein sequences, it is possible to supply a suitable combination of
input parameters that allows such an approach to yield biologically
plausible results. However, for difficult-to-align protein sequences,
supplying different combinations of input parameters yields different
results. Such variable results create ambiguities and complicate the
similarity measurement task.
To overcome these drawbacks, this paper describes a novel and
effective approach for measuring the similarity between protein
sequences, called SAF for Substitution and Alignment Free. Without
resorting either to the alignment of protein sequences or to substitution
relations between amino acids, SAF is able to efficiently detect the
significant subsequences that best represent the intrinsic properties of
protein sequences, those underlying the chronological dependencies of
structural features and biochemical activities of protein sequences.
Moreover, by using a new efficient subsequence matching scheme,
SAF more efficiently handles protein sequences that contain similar
structural features with significant meaning in chronologically
non-equivalent positions. To show the effectiveness of SAF, extensive
experiments were performed on protein datasets from different
databases, and the results were compared with those obtained by
several mainstream algorithms.
Abstract: One of the major, difficult tasks in automated video
surveillance is the segmentation of relevant objects in the scene.
Current implementations often yield inconsistent results from frame
to frame when trying to differentiate partly occluding objects. This
paper presents an efficient block-based segmentation
algorithm which is capable of separating partly occluding objects and
detecting shadows. It has been proven to perform in real time with a
maximum duration of 47.48 ms per frame (for 8x8 blocks on a
720x576 image) with a true positive rate of 89.2%. The flexible
structure of the algorithm enables adaptations and improvements with
little effort. Most of the parameters correspond to relative differences
between quantities extracted from the image and should therefore not
depend on scene and lighting conditions. The result is a
performance-oriented segmentation algorithm that is applicable in
time-critical real-time scenarios.
Abstract: Data mining uses a variety of techniques, each of which is useful for a particular task. It is important to have a deep understanding of each technique and to be able to perform sophisticated analysis. In this article we describe a tool built to simulate a variation of the Kohonen network, performing unsupervised clustering and supporting the entire data mining process up to results visualization. A graphical representation helps the user work out a strategy to optimize classification by adding, moving or deleting a neuron in order to change the number of classes. The tool is also able to automatically suggest a strategy for optimizing the number of classes. The tool is used to classify macroeconomic data that report the imports and exports of the most developed countries. It is possible to classify the countries based on their economic behaviour and to use an ad hoc tool to characterize the commercial behaviour of a country in a selected class from the analysis of the positive and negative features that contribute to class formation.
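The clustering step described in the abstract above can be sketched with a minimal Kohonen self-organizing map. This is an illustrative pure-Python implementation under simplified assumptions (a 1-D map, deterministic initialization, toy 2-D data), not the tool's actual engine; adding or deleting a neuron to change the number of classes corresponds to changing `n_units`.

```python
import math

def train_som(data, n_units, epochs=100, lr0=0.5):
    """Train a 1-D Kohonen self-organizing map on low-dimensional points."""
    # Deterministic init: spread the units over the data set.
    weights = [list(data[i * len(data) // n_units]) for i in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                           # decaying learning rate
        radius = max(0.01, (n_units / 2) * (1 - epoch / epochs))  # shrinking neighbourhood
        for x in data:
            # Best-matching unit (BMU): the unit whose weight vector is closest.
            bmu = min(range(n_units),
                      key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))
            # Pull the BMU and its map neighbours toward the sample.
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                weights[i] = [w + lr * h * (v - w) for w, v in zip(weights[i], x)]
    return weights

def classify(data, weights):
    """Assign each point to the class (unit) of its best-matching weight vector."""
    return [min(range(len(weights)),
                key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))
            for x in data]

# Two well-separated 2-D groups of points.
group_a = [(0.0 + i * 0.01, 0.0) for i in range(10)]
group_b = [(5.0 + i * 0.01, 5.0) for i in range(10)]
weights = train_som(group_a + group_b, n_units=2)
labels = classify(group_a + group_b, weights)
```

As training proceeds the neighbourhood radius shrinks, so the map transitions from cooperative ordering to purely competitive learning, and each unit settles near one group.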
Abstract: Non-viral gene carriers composed of biodegradable
polymers or lipids have been considered a safer alternative to viral
vectors. We have developed multi-functional nano-micelles for both
drug and gene delivery applications.
Polyethyleneimine (PEI) was modified by grafting stearic acid (SA)
and formulated to polymeric micelles (PEI-SA) with positive surface
charge for gene and drug delivery. Our results showed that PEI-SA
micelles provided high siRNA binding efficiency. In addition, siRNA
delivered by PEI-SA carriers also demonstrated significantly high
cellular uptake even in the presence of serum proteins. The
post-transcriptional gene silencing efficiency was greatly improved by
the polyplex formulated by 10k PEI-SA/siRNA. The amphiphilic
structure of PEI-SA micelles provided advantages for multifunctional
tasks: the hydrophilic shell, modified with cationic charges, can
electrostatically interact with DNA or siRNA, while the hydrophobic
core can carry hydrophobic drug payloads, making PEI-SA a
promising multifunctional vehicle for both gene therapy and
chemotherapy applications.
Abstract: EDF (Earliest Deadline First) is a very important scheduling algorithm for real-time systems. The EDF algorithm assigns a priority to each job according to its absolute deadline and performs well when the real-time system is not overloaded. When the system is overloaded, however, many deadline misses occur, and these misses are not uniformly distributed: they are usually concentrated on a few tasks. In this paper, we present an adaptive fuzzy-control scheduling algorithm based on EDF. The improved algorithm yields a uniform (rectangular) distribution of deadline-miss ratios among all real-time tasks when the system is overloaded. To evaluate the effectiveness of the improved algorithm, we have carried out extensive simulation studies. The simulation results show that the new algorithm is superior to the original one.
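The deadline-miss concentration that motivates the paper can be reproduced with a tiny EDF simulator. The sketch below is illustrative: it assumes periodic tasks with implicit deadlines (deadline equals period) and unit time steps, and the task sets and horizon are made up.

```python
def edf_simulate(tasks, horizon):
    """Simulate preemptive EDF scheduling of periodic tasks with implicit
    deadlines (deadline = period) and count deadline misses per task."""
    # Generate every job released within the horizon:
    # [release, remaining execution, absolute deadline, task id]
    jobs = []
    for tid, (period, wcet) in enumerate(tasks):
        for release in range(0, horizon, period):
            jobs.append([release, wcet, release + period, tid])
    misses = [0] * len(tasks)
    for now in range(horizon):
        # Ready jobs: released, unfinished, deadline not yet expired.
        ready = [j for j in jobs if j[0] <= now and j[1] > 0 and j[2] > now]
        if ready:
            # EDF rule: run the ready job with the earliest absolute deadline.
            job = min(ready, key=lambda j: j[2])
            job[1] -= 1
        # A job that still has work left when its deadline expires is a miss.
        for j in jobs:
            if j[2] == now + 1 and j[1] > 0:
                misses[j[3]] += 1
    return misses

# Underloaded (U = 1/2 + 1/3 < 1): EDF meets every deadline.
light = edf_simulate([(2, 1), (3, 1)], horizon=12)
# Overloaded (U = 1/2 + 2/3 > 1): misses appear, concentrated on one task.
heavy = edf_simulate([(2, 1), (3, 2)], horizon=12)
```

In the overloaded case all the misses land on the longer task, which is exactly the skew the proposed fuzzy-control extension is meant to even out.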
Abstract: Human identification at a distance has recently gained
growing interest from computer vision researchers. Gait recognition
aims essentially to address this problem by identifying people based
on the way they walk [1]. Gait recognition has three steps:
preprocessing, feature extraction and classification. This paper
focuses on the classification step, which is essential to increasing
the CCR (Correct Classification Rate). A Multilayer Perceptron
(MLP) is used in this work. Neural networks imitate the human brain
to perform intelligent tasks [3]. They can
represent complicated relationships between input and output and
acquire knowledge about these relationships directly from the data
[2]. In this paper we apply an MLP neural network to the 11 views in
our database and compare the CCR values across these views.
Experiments are performed
with the NLPR databases, and the effectiveness of the proposed
method for gait recognition is demonstrated.
Abstract: Mining sequential patterns from large customer transaction databases has been recognized as a key research topic in database systems. However, previous work focused mainly on mining sequential patterns at a single concept level. In this study, we introduce concept hierarchies into this problem and present several algorithms for discovering multiple-level sequential patterns based on the hierarchies. An experiment was conducted to assess the performance of the proposed algorithms, measured by the relative time spent completing the mining tasks on two different datasets. The experimental results showed that performance depends on the characteristics of the datasets and on the pre-defined minimal-support threshold for each level of the concept hierarchy. Based on the experimental results, some suggestions are also given on how to select an appropriate algorithm for a given dataset.
Abstract: Trust management and reputation models are
becoming an integral part of Internet-based applications such as
CSCW, e-commerce and grid computing. The trust dimension is also
a significant social structure and a key to social relations within a
collaborative community. Collaborative Decision Making (CDM) is
a difficult task in a distributed environment (information spread
across different geographical locations) in which multidisciplinary
decisions, such as in a Virtual Organization (VO), are involved. To
aid team decision making in a VO, Decision Support Systems and
social network analysis approaches are integrated. In such situations,
social learning helps an organization in terms of relationships, team
formation, partner selection etc. In this paper we focus on trust
learning. Trust learning is an important activity in terms of
information exchange, negotiation, collaboration and trust
assessment for cooperation among virtual team members. In this
paper we propose a reinforcement-learning approach that enhances
the trust-decision capability of interacting agents during
collaborative problem-solving activity. The trust computation model
with learning that we present is adapted to selecting the best
alternative for a new project in the organization. We verify our model
in a multi-agent
simulation where the agents in the community learn to identify
trustworthy members, inconsistent behavior and conflicting behavior
of agents.
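The trust-learning idea in the abstract above can be illustrated with a minimal reinforcement update in which each agent's trust value moves toward observed interaction outcomes. The cooperation probabilities and the exponential-moving-average update rule are illustrative assumptions, not the paper's exact model.

```python
import random

def learn_trust(behaviors, rounds=500, alpha=0.1, seed=1):
    """Learn trust values for partner agents from interaction outcomes.
    behaviors: probability that each agent cooperates (honours a commitment).
    Trust is updated by a simple reinforcement rule (exponential moving average)."""
    rng = random.Random(seed)
    trust = [0.5] * len(behaviors)   # start neutral
    for _ in range(rounds):
        for i, p_cooperate in enumerate(behaviors):
            outcome = 1.0 if rng.random() < p_cooperate else 0.0
            # Reinforcement update: move trust toward the observed outcome.
            trust[i] += alpha * (outcome - trust[i])
    return trust

# Three hypothetical agents: trustworthy, inconsistent, and untrustworthy.
trust = learn_trust([0.9, 0.5, 0.1])
```

After enough interactions the learned values separate the trustworthy agent from the untrustworthy one, while the inconsistent agent hovers near the neutral starting value.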
Abstract: The object of this research is the design and
evaluation of an immersive Virtual Learning Environment (VLE) for
deaf children. Recently we have developed a prototype immersive
VR game to teach sign language mathematics to deaf students aged
K-4 [1] [2]. In this paper we describe a significant extension of the
prototype application. The extension includes: (1) user-centered
design and implementation of two additional interactive
environments (a clock store and a bakery), and (2) user-centered
evaluation including development of user tasks, expert panel-based
evaluation, and formative evaluation. This paper is one of the few to
focus on the importance of user-centered, iterative design in VR
application development, and to describe a structured evaluation
method.
Abstract: This paper describes an optimal approach for feature
subset selection for leaf classification, based on a Genetic Algorithm
(GA) and Kernel-based Principal Component Analysis (KPCA).
Because selecting the optimal features is highly complex,
classification has become a critical task in analysing leaf image
data. Initially, shape, texture and colour features are extracted
from the leaf images. These extracted features are optimized
separately by GA and by KPCA. The approach then performs
an intersection operation over the subsets obtained from the two
optimization processes, and the resulting common subset is
used to train a Support Vector Machine (SVM). Our
experimental results successfully prove that the application of GA
and KPCA for feature subset selection using SVM as a classifier is
computationally effective and improves the accuracy of the classifier.
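The intersection step can be sketched directly. The feature names and the index subsets attributed to GA and KPCA below are hypothetical placeholders; only the set-intersection logic reflects the described approach.

```python
def intersect_feature_subsets(ga_selected, kpca_selected, feature_names):
    """Combine two independently optimized feature subsets by intersection,
    keeping only the features both methods agree on."""
    common = sorted(set(ga_selected) & set(kpca_selected))
    return [feature_names[i] for i in common]

# Hypothetical shape/texture/colour features and hypothetical indices
# selected by the GA run and by the KPCA ranking.
features = ["aspect_ratio", "solidity", "contrast", "entropy", "mean_hue", "mean_sat"]
ga_pick = [0, 1, 2, 4]     # assumed GA output
kpca_pick = [1, 2, 3, 4]   # assumed KPCA output
common = intersect_feature_subsets(ga_pick, kpca_pick, features)
# common == ["solidity", "contrast", "mean_hue"]
```

The common subset is what would then be forwarded to train the SVM classifier.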
Abstract: Personal name matching is a core task in national
citizen databases, text and web mining, information retrieval, online
library systems, e-commerce and record linkage systems. This has
motivated extensive research on name matching. Traditional name
matching methods are suitable for English and other Latin-script
languages. Asian languages that have no word boundaries, such as
Myanmar, still require a sounds-alike matching system in
Unicode-based applications. Hence we propose a matching algorithm
that produces sounds-alike (phonetic) patterns suited to Myanmar
spelling. Given the nature of Myanmar characters, we account for
word-boundary fragmentation and character collation, and we use a
pattern conversion algorithm that builds word patterns from the
fragmented and collated characters. We also create Myanmar
sounds-alike phonetic groups to support the phonetic matching. The
experimental results show a fragmentation accuracy of 99.32% and a
processing time of 1.72 ms.
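A Soundex-style sketch conveys the idea of sounds-alike grouping. The groups below are defined over Latin transliterations purely for illustration; the paper's phonetic groups are defined over Myanmar characters, and its fragmentation and collation steps are not reproduced here.

```python
# Hypothetical sounds-alike groups (illustrative only): consonants in the
# same group receive the same code, so confusable spellings match.
PHONETIC_GROUPS = {
    "k": "1", "g": "1", "q": "1",
    "s": "2", "z": "2", "c": "2",
    "t": "3", "d": "3",
    "p": "4", "b": "4",
    "m": "5", "n": "5",
}

def phonetic_pattern(word):
    """Map a word to its sounds-alike pattern: same-group consonants share a
    code, vowels are dropped, consecutive duplicates are collapsed."""
    pattern = []
    for ch in word.lower():
        code = PHONETIC_GROUPS.get(ch, "")
        if code and (not pattern or pattern[-1] != code):
            pattern.append(code)
    return "".join(pattern)

def sounds_alike(a, b):
    """Two names match if their phonetic patterns are identical."""
    return phonetic_pattern(a) == phonetic_pattern(b)
```

With these groups, `sounds_alike("tin", "din")` holds because t and d share a group, while `"tin"` and `"kin"` produce different patterns.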
Abstract: This article describes a numerical model of a
corridor at a Central Interim Spent Fuel Storage
Facility (hereinafter CISFSF). The model takes into account the
effect of air flows on the temperature of stored waste. The
computational model was implemented in the ANSYS/CFX
programming environment in the form of a CFD task solution, which
was compared with an approximate analytical calculation. The article
includes a categorization of the individual alternatives for the
ventilation of such underground systems. The aim was to evaluate a
ventilation system for a CISFSF with regard to its stability and
capacity to provide sufficient ventilation for the removal of heat
produced by stored casks with spent nuclear fuel.
Abstract: This study examines whether contrived success on a
task closely related to school subjects would promote students'
self-efficacy. In our previous study, junior high school students who
experienced contrived success on anagram tasks raised their sense of
self-efficacy and kept it high for a year. We tried to replicate that study,
substituting calculation tasks for the anagrams. One hundred eighteen
junior high school students participated in this study, 18 of whom were
surreptitiously given easier tasks than their classmates. Those students
with easier tasks outperformed their peers and thereby raised their
sense of self-efficacy. However, elevated self-efficacy did not persist,
falling to the starting level after only three months.
Abstract: Designing a charge pump with a high-gain Op-Amp
is a challenging task if a faithful response is to be obtained. The
design of a high-performance phase-locked loop requires a
high-performance charge pump. We have designed an operational
amplifier to reduce the error caused by high-speed glitches in a
transistor and by mismatched currents. A separate Op-Amp has been
designed in 180 nm CMOS technology with the CADENCE
VIRTUOSO tool. This paper describes the design of a
high-performance charge pump for a GHz CMOS PLL targeting
orthogonal frequency division multiplexing (OFDM) applications. A
high-speed, low-power Op-Amp with more than 500 MHz bandwidth
has been designed to increase the speed of the charge pump in the
phase-locked loop.
Abstract: A real-time distributed computing system uses
heterogeneously networked computers to solve a single problem, so
coordinating activities among the computers is a complex task, and
deadlines make it more complex still. Performance depends on many
factors such as traffic workloads, database system architecture,
underlying processors, disk speeds, etc. Simulation studies have been
performed to analyze performance under different transaction
scheduling conditions: different workloads, arrival rates, priority
policies, altered slack factors and a preemptive policy. The
performance metric of the experiments is the miss percentage, i.e.,
the percentage of transactions that the system is unable to complete
in time. The throughput of the system depends on the transaction
arrival rate, and performance can be enhanced by altering the slack
factor value: tuning the slack value of a transaction can prevent some
transactions from being killed or aborted. Under the preemptive
policy, many additional executions of new transactions can be
carried out.
Abstract: In this paper a low-cost knowledge-based system (KBS)
framework is proposed for the design of deep drawing dies, together
with a procedure for developing the system modules. The task of
building the system is structured into modules covering the major
activities of deep drawing die design. A manufacturability
assessment module of the proposed framework is developed to check
the manufacturability of deep drawn parts. The technological
knowledge is represented using IF-THEN rules coded in the
AutoLISP language. The module is designed to be loaded into the
prompt area of AutoCAD. The low cost of implementing the
proposed system makes it affordable for small and medium scale
sheet metal industries.
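The IF-THEN rule representation can be sketched in Python (the paper codes its rules in AutoLISP). The two rules and their thresholds below, such as a limiting drawing ratio of about 2.0, are illustrative rules of thumb, not the system's actual knowledge base.

```python
def check_manufacturability(part):
    """Apply IF-THEN rules to a deep-drawn part description and return
    warnings. Thresholds are illustrative assumptions only."""
    d_blank = part["blank_diameter"]
    d_punch = part["punch_diameter"]
    thickness = part["thickness"]
    warnings = []
    drawing_ratio = d_blank / d_punch
    # Rule 1: IF the drawing ratio exceeds the limiting drawing ratio
    #         THEN the part cannot be drawn in a single operation.
    if drawing_ratio > 2.0:
        warnings.append("drawing ratio %.2f > 2.0: multiple draws required"
                        % drawing_ratio)
    # Rule 2: IF the sheet is very thin relative to the blank
    #         THEN there is a wrinkling risk and a blank holder is needed.
    if thickness / d_blank < 0.01:
        warnings.append("t/D below 1%: blank holder needed to avoid wrinkling")
    return warnings

# A hypothetical part (dimensions in mm) that violates both rules.
part = {"blank_diameter": 120.0, "punch_diameter": 50.0, "thickness": 1.0}
report = check_manufacturability(part)
```

Each rule fires independently, so a part description can accumulate several warnings, mirroring how a rule base flags multiple manufacturability issues.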
Abstract: Estimating the time and cost of work completion in a
project, and following them up during execution, contribute to the
success or failure of a project and are very important for the project
management team. Delivering on time and within the budgeted cost
requires good management and control of the project. To deal with
the complex task of controlling and modifying the baseline project
schedule during execution, earned value management systems have
been set up and are widely used to measure and communicate the
real physical progress of a project. However, they often fail to
predict the total duration of the project. In this paper, data mining
techniques are used to predict the total project duration in terms of
the Time Estimate at Completion, EAC(t). For this purpose, we used
a project with 90 activities, updated day by day. The regular indexes
from the literature and the Earned Duration Method are used to
calculate the time estimate at completion; these are set as input data
for prediction, and the major parameters among them are identified
using the Clem software. Using data mining, the parameters that
affect EAC(t) and the relationships between them can be extracted,
which is very useful for managing a project with minimum delay
risk. This appears to be a simple, safe and applicable method for
predicting the completion time of a project during execution.
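A common earned-schedule formulation of the time estimate at completion is EAC(t) = PD / SPI(t), with SPI(t) = ES / AT, where ES is the earned schedule, AT the actual time and PD the planned duration. The sketch below uses this standard formulation with a made-up planned-value curve; the paper's data-mining model may differ.

```python
def time_estimate_at_completion(planned_value, earned_value_now,
                                actual_time, planned_duration):
    """Earned-schedule estimate of total project duration, EAC(t).
    planned_value: cumulative planned value at the end of each period.
    earned_value_now: earned value observed at actual_time."""
    # Earned schedule ES: the time at which the PV curve first reaches the
    # current EV, with linear interpolation inside the period.
    es = float(len(planned_value))
    for t in range(len(planned_value)):
        if planned_value[t] >= earned_value_now:
            prev = planned_value[t - 1] if t > 0 else 0.0
            step = planned_value[t] - prev
            es = t + ((earned_value_now - prev) / step if step else 0.0)
            break
    spi_t = es / actual_time            # time-based schedule performance index
    return planned_duration / spi_t     # EAC(t) = PD / SPI(t)

# Hypothetical cumulative PV over 10 periods; at period 5 the project has
# earned only the value planned for the end of period 4.
pv = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
eac_t = time_estimate_at_completion(pv, earned_value_now=40,
                                    actual_time=5, planned_duration=10)
# SPI(t) = ES/AT = 4/5, so EAC(t) = 10 / 0.8 = 12.5 periods
```

Quantities like ES, SPI(t) and EAC(t), recomputed at every daily update, are exactly the kind of regular indexes the abstract feeds into the prediction model.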
Abstract: Segmentation and quantification of stenosis is an
important task in assessing coronary artery disease. One of the main
challenges is measuring the real diameter of curved vessels.
Moreover, uncertainty in segmentation of different tissues in the
narrow vessel is an important issue that affects accuracy. This paper
proposes an algorithm to extract coronary arteries and measure the
degree of stenosis. A Markovian fuzzy clustering method is applied
to model the uncertainty arising from the partial volume effect. The
algorithm comprises four stages: segmentation, centreline extraction,
estimation of the plane orthogonal to the centreline, and
measurement of the degree of stenosis. To evaluate accuracy and
reproducibility, the approach
has been applied to a vascular phantom and the results are compared
with real diameter. The results of 10 patient datasets have been
visually judged by a qualified radiologist. The results reveal the
superiority of the proposed method compared to the Conventional
thresholding Method (CTM) on both datasets.
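Percent diameter stenosis is commonly computed by comparing the minimal lumen diameter against a healthy reference diameter. The sketch below uses this generic definition with a made-up diameter profile; it is an illustration of the final measurement stage, not the paper's exact procedure.

```python
def stenosis_degree(diameters, reference_index=0):
    """Percent diameter stenosis from a profile of vessel diameters measured
    in planes orthogonal to the centreline (generic NASCET-style definition)."""
    d_ref = diameters[reference_index]   # healthy reference diameter
    d_min = min(diameters)               # minimal lumen diameter
    return (1.0 - d_min / d_ref) * 100.0

# Hypothetical diameters (mm) sampled along the centreline; the vessel
# narrows mid-segment and widens again distally.
profile = [4.0, 3.8, 2.0, 1.6, 2.4, 3.9]
degree = stenosis_degree(profile)
# (1 - 1.6/4.0) * 100 = 60% stenosis
```

Measuring the diameters in planes orthogonal to the centreline, as the paper does, is what makes this formula meaningful for curved vessels.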
Abstract: Quality evaluation of an image is an important task in image processing applications. In image compression, the quality of the decompressed image is also a criterion for evaluating a given coding scheme. In the compression-decompression process, various artifacts such as blocking, blur and ringing (edge) artifacts are observed; however, quantifying these artifacts is a difficult task. We propose a novel method to quantify blur and ringing artifacts in an image.
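One widely used blur proxy, shown here purely for illustration (the abstract does not specify the paper's own metric), is the variance of the discrete Laplacian: blurring flattens local second differences, lowering the variance.

```python
def laplacian_variance(image):
    """Variance of the 4-neighbour discrete Laplacian over the interior of a
    grayscale image (list of rows): a standard sharpness proxy, where lower
    values suggest more blur."""
    h, w = len(image), len(image[0])
    values = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            values.append(image[y - 1][x] + image[y + 1][x] +
                          image[y][x - 1] + image[y][x + 1] -
                          4 * image[y][x])
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def box_blur(image):
    """3x3 mean filter over the interior, used here to blur a test image."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(image[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return out

# A sharp vertical edge; blurring it lowers the Laplacian variance.
sharp = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
blurred = box_blur(sharp)
```

Comparing the two variances gives a no-reference blur score of the kind such artifact-quantification methods build on.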
Abstract: Array signal processing involves signal enumeration and source localization. It is centred on the ability to fuse temporal and spatial information, captured by sampling the signals emitted from a number of sources at the sensors of an array, in order to carry out a specific estimation task: estimating source characteristics (mainly localization of the sources) and/or array characteristics (mainly array geometry). Beamforming is a general signal processing technique used to control the directionality of the reception or transmission of a signal; using beamforming, we can direct the majority of the signal energy received by the array. Multiple signal classification (MUSIC) is a highly popular eigenstructure-based, high-resolution method for estimating the direction of arrival (DOA). This paper examines the effect of missing sensors on DOA estimation. The accuracy of MUSIC-based DOA estimation is degraded significantly both by missing sensors among the receiving array elements and by unequal channel gains and phase errors in the receiver.
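The degradation caused by a missing sensor can be illustrated with a simple delay-and-sum DOA scan on a uniform linear array. Delay-and-sum beamforming is used here instead of MUSIC to keep the sketch dependency-free (MUSIC would additionally require an eigendecomposition of the covariance matrix). The array size, element spacing and source angle are made-up assumptions, and a dead sensor is modelled as a zero output on a noise-free snapshot.

```python
import cmath
import math

def ula_snapshot(n_sensors, theta_deg, wavelength=1.0, spacing=0.5):
    """One noise-free snapshot of a unit plane wave hitting a uniform
    linear array with half-wavelength element spacing."""
    phase = 2 * math.pi * spacing / wavelength * math.sin(math.radians(theta_deg))
    return [cmath.exp(1j * m * phase) for m in range(n_sensors)]

def das_spectrum(snapshot, angles_deg, wavelength=1.0, spacing=0.5):
    """Delay-and-sum beamformer output power over candidate angles (DOA scan)."""
    n = len(snapshot)
    powers = []
    for a in angles_deg:
        phase = 2 * math.pi * spacing / wavelength * math.sin(math.radians(a))
        steer = [cmath.exp(1j * m * phase) for m in range(n)]
        # Steer the array toward angle a and measure the combined power.
        y = sum(s.conjugate() * x for s, x in zip(steer, snapshot)) / n
        powers.append(abs(y) ** 2)
    return powers

angles = list(range(-90, 91))
x = ula_snapshot(8, theta_deg=20.0)
p_full = das_spectrum(x, angles)
doa_full = angles[p_full.index(max(p_full))]   # spectrum peaks at the true DOA

# A missing (dead) sensor: zero its output and rescan the same snapshot.
x_missing = x[:]
x_missing[3] = 0j
p_miss = das_spectrum(x_missing, angles)
```

With all eight elements the scan peaks at the true 20-degree DOA with full coherent gain; zeroing one element lowers and broadens the peak, a simplified picture of the accuracy loss the paper quantifies for MUSIC.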