Abstract: Current advancements in nanotechnology are dependent
on the capabilities that can enable nano-scientists to extend their eyes
and hands into the nano-world. For this purpose, a haptics (devices
capable of recreating tactile or force sensations) based system for
AFM (Atomic Force Microscope) is proposed. The system enables
the nano-scientists to touch and feel the sample surfaces, viewed
through the AFM, in order to provide them with a better understanding
of the physical properties of the surface, such as roughness, stiffness and
shape of molecular architecture. At this stage, the proposed work uses
offline images produced by AFM and performs image analysis to
create virtual surfaces suitable for haptic force analysis. The research
work is being extended from an offline to an online process,
in which the interaction will be done directly on the material surface for
realistic analysis.
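The abstract does not specify how image analysis turns an AFM image into haptic forces, so the following is only a plausible minimal sketch, not the authors' method: the grayscale image is treated as a height map, and a spring model pushes the haptic probe out of the surface. The stiffness constant `k` and the bilinear sampling are illustrative assumptions.

```python
def bilinear_height(height_map, x, y):
    """Sample the height map at fractional coordinates (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(height_map[0]) - 1)
    y1 = min(y0 + 1, len(height_map) - 1)
    fx, fy = x - x0, y - y0
    top = height_map[y0][x0] * (1 - fx) + height_map[y0][x1] * fx
    bot = height_map[y1][x0] * (1 - fx) + height_map[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def haptic_force(height_map, x, y, probe_z, k=2.0):
    """Spring model: push the probe up when it penetrates the virtual surface."""
    h = bilinear_height(height_map, x, y)
    penetration = h - probe_z
    return k * penetration if penetration > 0 else 0.0

surface = [[0.0, 0.2], [0.4, 1.0]]           # toy 2x2 AFM height image
print(haptic_force(surface, 0.5, 0.5, 0.1))  # probe below surface: positive force
print(haptic_force(surface, 0.5, 0.5, 1.0))  # probe above surface: zero force
```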
Abstract: Advances in processor architecture, such as multi-core
designs, have increased the size and complexity of parallel computer systems.
On multi-core architectures, several parallel languages
can be used to run parallel programs. One of these languages is
OpenMP, which is embedded in C/C++ or Fortran. Because of this
new architecture and its complexity, it is very important to evaluate
the performance of OpenMP constructs, kernels, and application
programs on multi-core systems. Performance analysis is the activity of
collecting information about the execution characteristics of a
program. Performance tools consist of at least three interfacing
software layers, including instrumentation, measurement, and
analysis. The instrumentation layer defines the measured
performance events. The measurement layer determines what
performance event is actually captured and how it is measured by the
tool. The analysis layer processes the performance data and
summarizes it into a form that can be displayed in performance tools.
In this paper, a number of OpenMP performance tools are surveyed,
explaining how each is used to collect, analyse, and display
performance data.
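The three software layers described above can be illustrated with a generic sketch that is not modeled on any particular tool's API: a decorator plays the instrumentation layer (defining measured events), a shared store plays the measurement layer (capturing timings), and a summary function plays the analysis layer.

```python
# Illustrative three-layer performance-tool structure (assumed names throughout).
import time
from collections import defaultdict

measurements = defaultdict(list)  # measurement layer: raw event records

def instrument(event_name):
    """Instrumentation layer: marks a region of code as a measured event."""
    def wrap(fn):
        def timed(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            measurements[event_name].append(time.perf_counter() - start)
            return result
        return timed
    return wrap

def analyze():
    """Analysis layer: reduce raw timings to a displayable summary."""
    return {event: {"count": len(ts), "total_s": sum(ts)}
            for event, ts in measurements.items()}

@instrument("parallel_region")   # would wrap an OpenMP-like construct
def work():
    return sum(i * i for i in range(1000))

work(); work()
print(analyze())
```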
Abstract: In general, reports are a form of representing data in
such a way that the user gets the information he or she needs. They can be built in
various ways, from the simplest (“select from") to the most complex
ones (results derived from different sources/tables with complex
formulas applied). Furthermore, calculation rules can either be hard-coded
in a program or built into the database for use by dynamic
code. This paper introduces two types of reports, defined in the
DB structure. The main goal is to manage calculations in an optimal
way, keeping the maintenance of reports as simple and smooth as
possible.
Abstract: The HIV-1 genome is highly heterogeneous. Because of this
variation, the features of the HIV-1 genome span a wide range, and
the infectivity of the virus differs depending on the chemokine
receptor it uses: R5 HIV viruses use the CCR5 coreceptor, X4
viruses use CXCR4, and R5X4 viruses can utilize both coreceptors.
Recently, bioinformatics studies have attempted to classify R5X4
viruses based on experiments on the HIV-1 genome.
In this study, the R5X4 type of HIV viruses was classified using an
Auto Regressive (AR) model together with Artificial Neural Networks
(ANNs). The statistical data of the R5X4, R5 and X4 viruses was
analyzed using signal processing methods and ANNs. Accessible
residues of these virus sequences were obtained and modeled with the AR
model, since the residue sequences are long and differ in length from one
another. Finally, the pre-processed data was used to evolve various
ANN structures for determining R5X4 viruses. Furthermore, ROC
analysis was applied to the ANNs to assess their real performance. The
results indicate that R5X4 viruses are classified successfully, with high
sensitivity and specificity in both the training and testing ROC analyses
for the RBF network, which gives the best performance among the ANN structures.
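The abstract does not give the AR fitting procedure, so the following hedged sketch only illustrates the dimensionality-reduction idea: a least-squares fit of a fixed-order AR model maps sequences of different lengths onto feature vectors of the same size. The order (2) and the toy sequence are assumptions.

```python
def ar2_features(x):
    """Least-squares fit of x[t] ~ a1*x[t-1] + a2*x[t-2]; returns (a1, a2)."""
    # Accumulate the 2x2 normal equations for the two coefficients.
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        s11 += x[t-1] * x[t-1]
        s12 += x[t-1] * x[t-2]
        s22 += x[t-2] * x[t-2]
        b1  += x[t] * x[t-1]
        b2  += x[t] * x[t-2]
    det = s11 * s22 - s12 * s12
    a1 = (b1 * s22 - b2 * s12) / det
    a2 = (s11 * b2 - s12 * b1) / det
    return a1, a2

# Generate a toy AR(2) sequence and recover its coefficients as features.
x = [1.0, 0.8]
for _ in range(30):
    x.append(0.5 * x[-1] - 0.3 * x[-2])
a1, a2 = ar2_features(x)
print(a1, a2)   # close to the generating coefficients 0.5 and -0.3
```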
Abstract: As the Internet technology has developed rapidly, the
number of identities (IDs) managed by each individual person has
increased and various ID management technologies have been
developed to assist users. However, most of these technologies are
vulnerable to the existing hacking methods such as phishing attacks
and key-logging. If the administrator's password is exposed, an
attacker can access the entire contents of the stolen user's data files on
other devices. To solve these problems, we propose here a new ID
management scheme based on a Single Password Protocol. The paper
presents the details of the new scheme as well as a formal analysis of
the method using BAN Logic.
Abstract: Association rule mining is an important problem in data
mining. The massively increasing volume of data in real-life databases
has motivated researchers to design novel, incremental algorithms
for association rule mining. In this paper, we propose an incremental
association rule mining algorithm that integrates a shocking
interestingness criterion into the process of building the model. A
new interesting measure called shocking measure is introduced. One
of the main features of the proposed approach is to capture the user
background knowledge, which is monotonically augmented. The
incremental model that reflects the changing data and the user beliefs
is attractive because it makes the overall KDD process more
effective and efficient. We implemented the proposed approach,
evaluated it on several public datasets, and found the results quite
promising.
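The shocking measure itself is not defined in the abstract; the sketch below therefore shows only the generic incremental idea, where support counts mined from the original database are merged with counts from newly arrived transactions instead of re-scanning everything. The itemset-size limit and toy transactions are assumptions.

```python
# Incremental support counting (illustrative; not the paper's full algorithm).
from itertools import combinations
from collections import Counter

def count_itemsets(transactions, max_size=2):
    """Count every itemset up to max_size in the given transactions."""
    counts = Counter()
    for t in transactions:
        for size in range(1, max_size + 1):
            for itemset in combinations(sorted(set(t)), size):
                counts[itemset] += 1
    return counts

def incremental_update(old_counts, new_transactions, max_size=2):
    """Merge counts for the newly arrived increment into the existing model."""
    return old_counts + count_itemsets(new_transactions, max_size)

old = count_itemsets([["a", "b"], ["a", "c"]])
merged = incremental_update(old, [["a", "b", "c"]])
print(merged[("a",)], merged[("a", "b")])
```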
Abstract: Web sites are rapidly becoming the preferred media
choice for daily tasks such as information search, company
presentation, shopping, and so on. At the same time, we live in a
period where visual appearances play an increasingly important
role in our daily life. In spite of designers' efforts to develop web
sites that are both user-friendly and attractive, it is difficult
to ensure the outcome's aesthetic quality, since visual
appearance is a matter of individual self-perception and opinion.
In this study, we attempt to develop an automatic system for
the aesthetic evaluation of web pages, which are the building blocks of
web sites. Based on the image processing techniques and artificial
neural networks, the proposed method would be able to categorize
the input web page according to its visual appearance and aesthetic
quality. The employed features are multiscale/multidirectional
textural and perceptual color properties of the web pages, fed to
a perceptron ANN that has been trained as the evaluator. The
method is tested on university web sites, and the results
suggest that it performs well in web page aesthetic
evaluation tasks, with around 90% correct categorization.
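The final stage, a perceptron trained as the evaluator, can be sketched as follows. The two-dimensional toy vectors merely stand in for the textural and perceptual color features and are not the features actually used in the study.

```python
# Classic perceptron learning rule on toy, linearly separable "feature vectors".
def perceptron_train(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):          # y in {0, 1}
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                          # update only on mistakes
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def perceptron_predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Stand-ins for (texture energy, color harmony); label 1 = aesthetically good.
X = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.3]]
y = [1, 1, 0, 0]
w, b = perceptron_train(X, y)
print([perceptron_predict(w, b, x) for x in X])   # matches the labels
```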
Abstract: Investigating language acquisition is one of the most
challenging problems in the area of studying language. Syllable
learning as a level of language acquisition has a considerable
significance since it plays an important role in language acquisition.
Because it is impossible to study language acquisition directly
in children, especially in its developmental phases, computer
models are useful for examining language acquisition. In this
paper a computer model of early language learning for syllable
learning is proposed. It is guided by a conceptual model of syllable
learning which is named Directions Into Velocities of Articulators
model (DIVA). The computer model uses simple associational and
reinforcement learning rules within a neural network architecture
inspired by neuroscience. Our simulation results verify the
ability of the proposed computer model to produce phonemes
during babbling and early speech. Also, it provides a framework for
examining the neural basis of language learning and communication
disorders.
Abstract: Biclustering is a very useful data mining technique for
identifying patterns where different genes are co-related based on a
subset of conditions in gene expression analysis. Association rule
mining is an efficient approach to biclustering, as in the
BIMODULE algorithm, but it is sensitive to the values given to its
input parameters and to the discretization procedure used in the
preprocessing step. Moreover, when noise is present, classical association
rule miners discover multiple small fragments of the true bicluster
but miss the true bicluster itself. This paper formally presents a
generalized noise tolerant bicluster model, termed as μBicluster. An
iterative algorithm termed as BIDENS based on the proposed model
is introduced that can discover a set of k possibly overlapping
biclusters simultaneously. Our model uses a more flexible method to
partition the dimensions to preserve meaningful and significant
biclusters. The proposed algorithm allows the discovery of biclusters that
are hard for BIMODULE to find. Experimental study on yeast,
human gene expression data and several artificial datasets shows that
our algorithm offers substantial improvements over several
previously proposed biclustering algorithms.
Abstract: In this paper, a new method of controlling the position of an AC servomotor using a Field Programmable Gate Array (FPGA) is presented. The FPGA controller is used to generate the direction and the number of pulses required to rotate through a given angle. Pulses are sent as a square wave: the number of pulses determines the angle of rotation, and the frequency of the square wave determines the speed of rotation. The proposed control scheme has been realized using a XILINX FPGA SPARTAN XC3S400 and tested using a MUMA012PIS model Alternating Current (AC) servomotor. Experimental results show that the position of the AC servomotor can be controlled effectively.
Keywords: Alternating Current (AC), Field Programmable Gate Array (FPGA), Liquid Crystal Display (LCD).
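The pulse arithmetic described above can be sketched in a few lines; the pulses-per-revolution value is an assumed example, not a MUMA012PIS specification.

```python
# Assumed drive setting: pulses needed for one full 360-degree turn.
PULSES_PER_REV = 2000

def pulses_for_angle(angle_deg):
    """Number of square-wave pulses needed to rotate through angle_deg."""
    return round(PULSES_PER_REV * angle_deg / 360.0)

def pulse_frequency(speed_rpm):
    """Square-wave frequency (Hz) that yields the requested speed in rev/min."""
    return PULSES_PER_REV * speed_rpm / 60.0

print(pulses_for_angle(90))    # quarter turn -> 500 pulses
print(pulse_frequency(60))     # one revolution per second -> 2000.0 Hz
```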
Abstract: To identify discriminative sequence features between
exons and introns, a new paradigm, rescaled-range frameshift
analysis (RRFA), is proposed. By RRFA, two new sequence features,
the frameshift sensitivity (FS) and the accumulative penta-mer
complexity (APC), were discovered and further integrated into a new
feature of larger scale, the persistency in anti-mutation (PAM).
Feature-validation experiments were performed on six model organisms
to test the discriminative power. All the experimental results strongly
support that FS, APC and PAM are all distinguishing features between
exons and introns. These newly identified sequence features provide
new insights into the sequence composition of genes and have great
potential to form a new basis for recognizing exon-intron boundaries
in gene sequences.
Abstract: In this work, a software simulation model is proposed
for path planning of a two-driven-wheels mobile robot that can
navigate in a dynamic environment with statically distributed obstacles.
The work utilizes the Bezier curve method, in a proposed order-N
matrix form, for engineering the mobile robot path. The drawbacks of
Bezier curves in this field have been diagnosed, and a two-direction
(up and right) Probability Recursive Function (PRF) is proposed to
overcome them.
The PRF functionality is developed through a proposed
obstacle-detection function, an optimization function capable of
predicting the optimum path without comparing all feasible paths,
and an order-N Bezier curve function that ensures the drawing of the
obtained path.
Simulation results show that the mobile robot travels successfully
from its starting point to its goal point, avoiding all obstacles
located in its way, using the proposed PRF technique.
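The order-N Bezier curve function that draws the path can be sketched via the Bernstein basis; the PRF and obstacle-detection functions are not reproduced here, and the control points below are toy values.

```python
# Order-N Bezier curve evaluated from its control points (Bernstein form).
from math import comb

def bezier_point(control_points, t):
    """Evaluate an order-N Bezier curve at parameter t in [0, 1]."""
    n = len(control_points) - 1
    x = y = 0.0
    for i, (px, py) in enumerate(control_points):
        basis = comb(n, i) * (1 - t) ** (n - i) * t ** i
        x += basis * px
        y += basis * py
    return x, y

# A cubic path from a start point to a goal point through two waypoints.
path = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(bezier_point(path, 0.0))   # start point
print(bezier_point(path, 1.0))   # goal point
print(bezier_point(path, 0.5))   # midpoint of the curve
```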
Abstract: This paper presents four unsupervised clustering algorithms, namely sIB, RandomFlatClustering, FarthestFirst, and FilteredClusterer, that previous works have not used for network traffic classification. The methodology, the results, the produced clusters, and an evaluation of each algorithm's accuracy are shown. The efficiency of these algorithms is also considered in terms of the time needed to generate the clusters quickly and correctly. Our work studies and tests for the best algorithm by classifying traffic anomalies in network traffic with attributes that have not been used before. We analyze the algorithm with the best efficiency and learning performance and compare it to the previously used K-Means. Our research will be used to develop a more efficient anomaly detection system in the future.
Abstract: A key aspect of the design of any software system is
its architecture. An architecture description provides a formal model
of the architecture in terms of components and connectors and how
they are composed together. COSA (Component-Object based
Software Structures) is based on object-oriented modeling and
component-based modeling. The model improves reusability by
increasing the extensibility, evolvability, and compositionality of
software systems. This paper presents the COSA modelling tool,
which gives architects the ability to verify the structural coherence
of a given system and to validate its semantics with the COSA approach.
Abstract: There are many problems associated with the World Wide
Web: getting lost in hyperspace, web content that is still accessible only
to humans, and difficulties of web administration. The solution to these
problems is the Semantic Web, which is considered to be an extension
of the current web that presents information in both human-readable and
machine-processable form. The aim of this study is to reach a new
generic foundation architecture for the Semantic Web, because no
clear architecture exists for it: there are four versions, but up to
now there is no agreement on any one of them, nor is there a
clear picture of the relation between the different layers and
technologies inside this architecture. This can be done by building on
the ideas of the previous versions as well as Gerber's evaluation method,
as a step toward an agreement on one Semantic Web architecture.
Abstract: In data mining, fuzzy clustering algorithms have
demonstrated an advantage over crisp clustering algorithms in dealing
with the challenges posed by large collections of vague and uncertain
natural data. This paper reviews the concepts of fuzzy logic and fuzzy
clustering. The classical fuzzy c-means algorithm is presented and its
limitations are highlighted. Based on the study of the fuzzy c-means
algorithm and its extensions, we propose a modification to the c-means
algorithm to overcome its limitations in calculating the
new cluster centers and in finding the membership values with
natural data. The efficiency of the new modified method is
demonstrated on real data collected for Bhutan's Gross National
Happiness (GNH) program.
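The classical fuzzy c-means updates that the paper sets out to modify can be sketched in plain Python for 1-D data; only the standard baseline is shown, not the proposed modification.

```python
# Standard fuzzy c-means: alternate membership and center updates.
def fcm_1d(data, centers, m=2.0, iterations=20):
    for _ in range(iterations):
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        u = []
        for x in data:
            dists = [abs(x - c) or 1e-12 for c in centers]  # guard zero distance
            row = [1.0 / sum((d_i / d_k) ** (2 / (m - 1)) for d_k in dists)
                   for d_i in dists]
            u.append(row)
        # Center update: c_j = sum_i u_ij^m * x_i / sum_i u_ij^m
        centers = [sum(u[i][j] ** m * data[i] for i in range(len(data))) /
                   sum(u[i][j] ** m for i in range(len(data)))
                   for j in range(len(centers))]
    return centers, u

data = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]
centers, u = fcm_1d(data, [0.0, 6.0])
print(sorted(round(c, 2) for c in centers))  # two centers near 1.0 and 5.0
```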
Abstract: Automated rule discovery is, due to its applicability, one of the most fundamental and important methods in KDD, and it has been an active research area in the recent past. Hierarchical representation allows us to manage the complexity of knowledge easily, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. An HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision If <condition> Generality <generality information> Specificity <specificity information>. HPR systems are capable of handling the taxonomical structures inherent in knowledge about the real world. This paper focuses on the issue of mining quantified rules with a crisp hierarchical structure using a Genetic Programming (GP) approach to knowledge discovery. The post-processing scheme presented in this work uses quantified production rules as the initial individuals of GP and discovers the hierarchical structure. In the proposed approach, rules are quantified using Dempster-Shafer theory. Suitable genetic operators are proposed for the suggested encoding. Based on the Subsumption Matrix (SM), an appropriate fitness function is suggested. Finally, Quantified Hierarchical Production Rules (HPRs) are generated from the discovered hierarchy using Dempster-Shafer theory. Experimental results are presented to demonstrate the performance of the proposed algorithm.
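Dempster's rule of combination, which the abstract uses to quantify rules, can be sketched for mass functions over frozensets; the GP encoding and the Subsumption Matrix are not reproduced here.

```python
# Dempster's rule: m(A) = sum over B ∩ C = A of m1(B)*m2(C) / (1 - K),
# where K is the mass assigned to the empty intersection (conflict).
def combine(m1, m2):
    combined, conflict = {}, 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc          # conflicting evidence
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

A, B = frozenset({"a"}), frozenset({"b"})
AB = A | B
m1 = {A: 0.6, AB: 0.4}   # evidence mostly supporting "a"
m2 = {B: 0.3, AB: 0.7}   # evidence partly supporting "b"
m = combine(m1, m2)
print(round(m[A], 3), round(m[B], 3), round(m[AB], 3))
```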
Abstract: As network-based technologies become
omnipresent, demands to secure networks and systems against threats
increase. One of the effective ways to achieve higher security is
through the use of intrusion detection systems (IDS), which are
software tools that detect anomalous activity in a computer or network. In this
paper, an IDS has been developed using an improved machine-learning-based
algorithm, the Locally Linear Neuro-Fuzzy Model
(LLNF), for classification, although this model was originally used for
system identification. A key technical challenge in IDS and LLNF
learning is the curse of high dimensionality. Therefore, a feature
selection phase is proposed which is applicable to any IDS. While
investigating the use of three feature selection algorithms in this
model, it is shown that adding a feature selection phase reduces the
computational complexity of our model. Feature selection algorithms
require the use of a feature goodness measure. The use of both a
linear measure (the linear correlation coefficient) and a non-linear
measure (mutual information) is investigated.
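The two feature-goodness measures named above can be sketched directly: the linear correlation coefficient and discrete mutual information. The feature/label data below is a toy illustration, not IDS data.

```python
# Linear and non-linear feature-goodness measures on toy discrete data.
from math import sqrt, log2
from collections import Counter

def correlation(xs, ys):
    """Pearson linear correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mutual_information(xs, ys):
    """Discrete mutual information in bits, from empirical frequencies."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

feature = [0, 0, 1, 1, 0, 1]
label   = [0, 0, 1, 1, 0, 1]            # perfectly informative feature
print(correlation(feature, label))       # close to 1.0
print(mutual_information(feature, label))
```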
Abstract: The rapid expansion of the web is causing the
constant growth of information, leading to several problems such as
increased difficulty of extracting potentially useful knowledge. Web
content mining confronts this problem by gathering explicit information
from different web sites for access and knowledge discovery.
Query interfaces of web databases share common building blocks.
After extracting information with a parsing approach, we use a new
data mining algorithm to match a large number of database schemas
at a time. Using this algorithm increases the speed of
information matching. In addition, instead of simple 1:1 matching,
it performs complex (m:n) matching between query interfaces. In this
paper we present a novel correlation mining algorithm that matches
correlated attributes at a smaller cost. The algorithm uses the Jaccard
measure to distinguish positively and negatively correlated attributes.
After that, the system matches the user query against different query
interfaces in a specific domain and finally chooses the query
interface nearest to the user query to answer it.
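The Jaccard measure used to separate positively and negatively correlated attributes can be sketched over the sets of query interfaces in which each attribute occurs; the toy schema data is an illustrative assumption.

```python
def jaccard(a_interfaces, b_interfaces):
    """|A ∩ B| / |A ∪ B| over the interfaces containing each attribute."""
    union = a_interfaces | b_interfaces
    return len(a_interfaces & b_interfaces) / len(union) if union else 0.0

# Toy schema data: which query interfaces mention each attribute.
occurs = {
    "author":    {1, 2, 3, 4},
    "last name": {5, 6},       # alternative to "author": rarely co-occurs
    "title":     {1, 2, 3, 5, 6},
}
print(jaccard(occurs["author"], occurs["title"]))      # high: positively correlated
print(jaccard(occurs["author"], occurs["last name"]))  # zero: negatively correlated
```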
Abstract: This paper proposes an auto-classification algorithm
of Web pages using Data mining techniques. We consider the
problem of discovering association rules between terms in a set of
Web pages belonging to a category in a search engine database, and
present an auto-classification algorithm for solving this problem that
is fundamentally based on the Apriori algorithm. The proposed
technique has two phases. The first phase is a training phase in which
human experts determine the categories of different Web pages, and
the supervised data mining algorithm combines these categories
with appropriate weighted index terms according to the highest-support
rules among the most frequent words. The second phase is
the categorization phase where a web crawler will crawl through the
World Wide Web to build a database categorized according to the
result of the data mining approach. This database contains URLs and
their categories.
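The Apriori-based core of the training phase can be sketched as a frequent-itemset search over the terms of pages in one category; the weighting scheme and the crawler are not reproduced, and the page data is a toy illustration.

```python
# Minimal Apriori: level-wise search for frequent term sets.
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets (as frozensets) with their support counts."""
    items = {frozenset([i]) for t in transactions for i in t}
    frequent, level = {}, items
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        kept = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(kept)
        # Candidate generation: join frequent k-sets into (k+1)-sets.
        level = {a | b for a in kept for b in kept if len(a | b) == len(a) + 1}
    return frequent

# Toy "pages in one category", each reduced to its set of index terms.
pages = [{"nano", "afm", "haptics"}, {"nano", "afm"}, {"nano", "robot"}]
freq = apriori(pages, min_support=2)
print(freq[frozenset({"nano", "afm"})])   # the pair {nano, afm} occurs twice
```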