Abstract: With the development of ubiquitous computing,
current user interaction approaches with keyboard, mouse and pen
are not sufficient. Due to the limitation of these devices the useable
command set is also limited. Direct use of hands as an input device is
an attractive method for providing natural Human Computer
Interaction which has evolved from text-based interfaces through 2D
graphical-based interfaces, multimedia-supported interfaces, to fully
fledged multi-participant Virtual Environment (VE) systems.
Imagine the human-computer interaction of the future: a 3D application where you can move and rotate objects simply by moving and rotating your hand, all without touching any input device. In this paper a review of vision-based hand gesture recognition is presented. The existing approaches are categorized into 3D model-based approaches and appearance-based approaches, highlighting their advantages and shortcomings and identifying the open issues.
Abstract: A Censored Production Rule (CPR) is an extension of the standard production rule that addresses the problems of reasoning with incomplete information subject to resource constraints and of reasoning efficiently with exceptions. A CPR has the form: IF A (Condition) THEN B (Action) UNLESS C (Censor), where C is the exception condition. Fuzzy CPRs are obtained by augmenting an ordinary fuzzy production rule "If X is A then Y is B" with an exception condition and are written in the form "If X is A then Y is B unless Z is C". Such rules are employed in situations in which the fuzzy conditional statement "If X is A then Y is B" holds frequently and the exception condition "Z is C" holds rarely. Thus the "If X is A then Y is B" part of the fuzzy CPR expresses important information, while the unless part acts only as a switch that changes the polarity of "Y is B" to "Y is not B" when the assertion "Z is C" holds. The proposed approach is an attempt to discover fuzzy censored production rules from a set of discovered fuzzy if-then rules in the form:
A(X) → B(Y) || C(Z).
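As an illustration only (not the authors' discovery algorithm), the polarity-switching behavior of such a rule can be sketched in Python; the membership degrees and the 0.5 censor threshold are hypothetical assumptions:

```python
def evaluate_cpr(mu_A, mu_C, censor_threshold=0.5):
    """Sketch of a fuzzy censored production rule
    'If X is A then Y is B unless Z is C'.

    mu_A: degree to which the condition 'X is A' holds.
    mu_C: degree to which the censor 'Z is C' holds.
    When the censor holds, the polarity of the conclusion flips
    from 'Y is B' to 'Y is not B'.
    """
    if mu_C >= censor_threshold:       # rare exception: censor fires
        return ("Y is not B", mu_A)
    return ("Y is B", mu_A)            # frequent case

# The frequent case: condition holds strongly, censor barely holds.
print(evaluate_cpr(0.9, 0.1))  # ('Y is B', 0.9)
# The rare exception: censor holds, polarity switches.
print(evaluate_cpr(0.9, 0.8))  # ('Y is not B', 0.9)
```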
Abstract: With the development of communications and web-based technologies in recent years, e-Learning has become very important for everyone and is seen as one of the most dynamic teaching methods.
Grid computing is a pattern for increasing the computing power and storage capacity of a system, based on the hardware and software resources of a network serving a common purpose. In this article we study grid architecture and describe its different layers, analyzing the grid layered architecture. We then introduce a new architecture suitable for e-Learning, based on a grid network, which we therefore call the Grid Learning Architecture. The various sections and layers of the suggested architecture are analyzed, especially the grid middleware layer, which plays a key role. This layer is the heart of the grid learning architecture; without it, e-Learning based on grid architecture would not be feasible.
Abstract: Web services provide significant new benefits for SOA-based applications, but they also expose significant new security risks. There is a huge number of WS security standards and processes, yet there is still a lack of a comprehensive approach that offers a methodical development process for the construction of secure WS-based SOA. Thus, the main objective of this paper is to address this need by presenting a comprehensive method for guaranteeing Web Services Security in SOA. The proposed method defines three stages: Initial Security Analysis, Architectural Security Guaranty and WS Security Standards Identification. These facilitate, respectively, the definition and analysis of WS-specific security requirements, the development of a WS-based security architecture, and the identification of the related WS security standards that the security architecture must articulate in order to implement the security services.
Abstract: In this paper a data miner based on learning automata, called LA-miner, is proposed. The LA-miner extracts classification rules from data sets automatically. The proposed algorithm is built on function optimization using learning automata. Experimental results on three benchmarks indicate that the performance of the proposed LA-miner is comparable with (and sometimes better than) that of Ant-miner (a data mining algorithm based on Ant Colony Optimization) and CN2 (a well-known data mining algorithm for classification).
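Since the abstract assumes familiarity with learning automata, here is a minimal sketch of the probability update such automata rely on; the linear reward-inaction scheme and the learning rate a=0.1 are standard textbook choices, not details of LA-miner itself:

```python
def lri_update(probs, action, reward, a=0.1):
    """Linear reward-inaction (L_RI) update, the basic step a learning
    automaton uses to adapt its action probabilities.

    probs: current action probability vector.
    action: index of the chosen action.
    reward: True if the environment rewarded the chosen action.
    On reward, probability mass shifts toward the chosen action;
    on penalty the vector is left unchanged (the 'inaction' part).
    """
    if reward:
        return [p + a * (1.0 - p) if i == action else p * (1.0 - a)
                for i, p in enumerate(probs)]
    return probs

# Rewarding action 0 repeatedly drives its probability toward 1,
# while the vector stays a valid probability distribution.
probs = [0.5, 0.5]
for _ in range(30):
    probs = lri_update(probs, 0, reward=True)
print(probs[0])
```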
Abstract: The advantage of solving complex nonlinear problems with fuzzy logic methodologies is that experience or expert knowledge, described as a fuzzy rule base, can be embedded directly into a system to deal with the problem. This paper focuses on the current limitations of appropriate and automated design of fuzzy controllers. The structure discovery and parameter adjustment of the branched T-S fuzzy model are addressed by a hybrid technique of type-constrained sparse tree algorithms. The simulation results for different system models are evaluated, and the identification error is observed to be minimal.
Abstract: Salient points are frequently used to represent local properties of an image in content-based image retrieval. In this paper, we present a reduction algorithm that extracts the locally most salient points such that they not only give a satisfying representation of an image but also make the image retrieval process efficient. The algorithm recursively reduces the continuous point set by the points' corresponding saliency values in a top-down approach. The resulting salient points are evaluated with an image retrieval system using the Hausdorff distance. Experiments show that our method is robust and that the extracted salient points provide better retrieval performance compared with other point detectors.
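The Hausdorff distance used above to compare two salient point sets can be sketched directly in Python; the toy point sets are hypothetical:

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2-D point sets:
    the largest distance from a point in one set to its nearest
    neighbor in the other set, taken in both directions."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

# Two toy salient-point sets differing in one point.
a = [(0, 0), (1, 0)]
b = [(0, 0), (1, 1)]
print(hausdorff(a, b))  # 1.0
```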
Abstract: There are several existing Java benchmarks, application benchmarks, micro benchmarks or mixtures of both, such as Java Grande, SPECjvm98, CaffeineMark and HBench. But none of them deals with the behavior of multitasking operating systems, so the outputs they produce are unsatisfactory for performance evaluation engineers. The behavior of a multitasking operating system is governed by the scheduler it employs: different processes can have different priorities when sharing the same resources. Time measured from when an application starts until it finishes does not reflect the real time the system needs to run the program, so a new approach to this problem is needed. In this paper we therefore present a new Java benchmark, named the FHOJ benchmark, which directly addresses the multitasking behavior of a system. Our study shows that in some cases, results from the FHOJ benchmark are far more reliable than those from some existing Java benchmarks.
Abstract: Over the past five decades, textured polyester yarns produced by the false-twist method have become the most important and mass-produced manmade fibers. Many parameters of the fiber cross section affect the physical and mechanical properties of textured yarns: surface area, perimeter, equivalent diameter, large diameter, small diameter, convexity, stiffness, eccentricity, and hydraulic diameter. These parameters were evaluated by digital image processing techniques. To find trends between production criteria and the evaluated cross-section parameters, three criteria of the production line were adjusted and different types of yarns were produced; these criteria are temperature, drafting ratio, and D/Y ratio. Finally, the relations between the production criteria and the cross-section parameters were examined. The results showed that the presented technique can recognize and measure the parameters of the fiber cross section with acceptable accuracy. The optimum adjustment conditions were also estimated from the results of the image analysis.
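One of the listed cross-section parameters, the equivalent diameter, follows directly from the measured area; a minimal sketch (the formula d = sqrt(4A/pi) is standard, the demo value is hypothetical):

```python
import math

def equivalent_diameter(area):
    """Diameter of the circle having the same area as the fiber
    cross section: d = sqrt(4*A/pi)."""
    return math.sqrt(4.0 * area / math.pi)

# A circular cross section of radius 1 (area = pi) has
# equivalent diameter 2.
print(equivalent_diameter(math.pi))  # 2.0
```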
Abstract: Many supervised induction algorithms require discrete data, even though real data often comes in both discrete and continuous formats. Quality discretization of continuous attributes is an important problem that affects the speed, accuracy and understandability of the induction models. Usually, discretization and other statistical processes are applied to subsets of the population, as the entire population is practically inaccessible. For this reason we argue that a discretization performed on a sample of the population is only an estimate of one for the entire population. Most existing discretization methods partition the attribute range into two or more intervals using a single cut point or a set of cut points. In this paper, we introduce a technique that uses resampling (such as the bootstrap) to generate a set of candidate discretization points, thus improving discretization quality by providing a better estimate with respect to the entire population. The goal of this paper is to observe whether the resampling technique can lead to better discretization points, which opens up a new paradigm for the construction of soft decision trees.
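A minimal sketch of the resampling idea, assuming a bootstrap in which each resample proposes one binary cut point (its median); the sample data, resample count and choice of the median are illustrative assumptions, not the paper's exact procedure:

```python
import random

def bootstrap_cut_points(values, n_resamples=100, seed=0):
    """Generate candidate discretization cut points by bootstrap
    resampling: each resample (drawn with replacement) proposes its
    median as a cut point, and the spread of the candidates reflects
    sampling variability in the population estimate."""
    rng = random.Random(seed)
    candidates = []
    for _ in range(n_resamples):
        sample = sorted(rng.choice(values) for _ in values)
        candidates.append(sample[len(sample) // 2])  # median cut point
    return candidates

# Toy continuous attribute: 100 evenly spaced values.
values = list(range(100))
cuts = bootstrap_cut_points(values)
print(min(cuts), max(cuts))
```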
Abstract: Computing and maintaining network structures for efficient
data aggregation incurs high overhead for dynamic events
where the set of nodes sensing an event changes with time. Moreover,
structured approaches are sensitive to the waiting time that is used
by nodes to wait for packets from their children before forwarding
the packet to the sink. An optimal routing and data aggregation
scheme for wireless sensor networks is proposed in this paper. We
propose Tree on DAG (ToD), a semistructured approach that uses
Dynamic Forwarding on an implicitly constructed structure composed
of multiple shortest path trees to support network scalability. The key principle behind ToD is that adjacent nodes in the graph will have low stretch in at least one of these trees, thus resulting in early
aggregation of packets. Based on simulations on a 2,000-node Mica2-
based network, we conclude that efficient aggregation in large-scale
networks can be achieved by our semistructured approach.
Abstract: Generally, administrative systems in an academic
environment are disjoint and support independent queries. The
objective in this work is to semantically connect these independent
systems to provide support to queries run on the integrated platform.
The proposed framework, by enriching educational material in the
legacy systems, provides a value-added semantics layer where
activities such as annotation, query and reasoning can be carried out
to support management requirements. We discuss the development of
this ontology framework with a case study of UAE University
program administration to show how semantic web technologies can
be used by administration to develop student profiles for better
academic program management.
Abstract: The quest to provide more secure identification systems has led to a rise in the development of biometric systems. The dorsal hand vein pattern is an emerging biometric that has attracted the attention of many researchers of late. Different approaches have been used to extract vein patterns and match them. In this work, Principal Component Analysis (PCA), a method that has been applied successfully to human faces and hand geometry, is applied to the dorsal hand vein pattern. PCA is used to obtain eigenveins, a low-dimensional representation of vein-pattern features. Low-cost CCD cameras were used to obtain the vein images. The vein pattern was extracted by applying morphological operations, and noise reduction filters were applied to enhance the patterns. The system has been tested successfully on a database of 200 images using a threshold value of 0.9. The results obtained are encouraging.
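The final matching step can be illustrated as a similarity test between low-dimensional eigenvein feature vectors; the cosine measure and the sample vectors are assumptions for illustration (the abstract does not name its similarity measure), while the 0.9 acceptance threshold comes from the abstract:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def match(query, enrolled, threshold=0.9):
    """Accept the query if it is similar enough to an enrolled
    eigenspace template (hypothetical matching rule)."""
    return cosine_similarity(query, enrolled) >= threshold

enrolled = [0.8, 0.1, 0.3]                   # hypothetical projection
print(match([0.79, 0.12, 0.28], enrolled))   # nearly identical vein
print(match([0.1, 0.9, -0.4], enrolled))     # a different hand
```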
Abstract: Detection, feature extraction and pose estimation of people in images and video are made challenging by the variability of human appearance, the complexity of natural scenes and the high dimensionality of articulated body models; they have also become an important field in image, signal and vision computing in recent years. In this paper, a system is proposed and tested that classifies people in 2D images into four body types: tall fat, short fat, tall thin and short thin. The system extracts the body from the image, determines whether it is fat or thin from the extracted body, measures body dimensions such as length and width, and presents them in the output.
Abstract: We have developed a database for membrane protein functions, which has more than 3000 experimental data on functionally important amino acid residues in membrane proteins along with sequence, structure and literature information. Further, we have proposed different methods for identifying membrane proteins based on their functions: (i) discrimination of membrane transport proteins from other globular and membrane proteins and classifying them into channels/pores, electrochemical and active transporters, and (ii) β-signal for the insertion of mitochondrial β-barrel outer membrane proteins and potential targets. Our method showed an accuracy of 82% in discriminating transport proteins and 68% to classify them into three different transporters. In addition, we have identified a motif for targeting β-signal and potential candidates for mitochondrial β-barrel membrane proteins. Our methods can be used as effective tools for genome-wide annotations.
Abstract: An optimal solution for a large number of constraint
satisfaction problems can be found using the technique of
substitution and elimination of variables analogous to the technique
that is used to solve systems of equations. A decision function
f(A) = max(A²) is used to determine which variables to eliminate. The algorithm can be expressed in six lines and is remarkable in both its simplicity and its ability to find an optimal solution. However, it is inefficient in that it needs to square the updated A matrix after each
variable elimination. To overcome this inefficiency the algorithm is
analyzed and it is shown that the A matrix only needs to be squared
once at the first step of the algorithm and then incrementally updated
for subsequent steps, resulting in significant improvement and an
algorithm complexity of O(n3).
Abstract: The necessity of solving multidimensional, complicated scientific problems, together with the need to optimize several objective functions at once, is the main motivation behind the development of artificial intelligence and heuristic methods.
In this paper, we introduce a new method for multiobjective optimization based on learning automata. In the proposed method, the search space is divided into separate hyper-cubes, and each cube is considered an action. After aggregating all objective functions with separate weights, the cumulative function is taken as the fitness function. By applying all the cubes to the cumulative function, we calculate the amount of reinforcement of each action, and the algorithm continues in this way to find the best solutions. In this method, a lateral memory is used to gather the significant points of each iteration of the algorithm. Finally, by considering the domination factor, the Pareto front is estimated. Results of several experiments show the effectiveness of this method in comparison with a genetic-algorithm-based method.
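Two of the ingredients described above, the weighted cumulative fitness and the domination test used to extract the Pareto front, can be sketched as follows; the objective functions and weights are hypothetical:

```python
def cumulative_fitness(point, objectives, weights):
    """Aggregate several objective functions with separate weights
    into one cumulative fitness value, as used to score each
    hyper-cube (action)."""
    return sum(w * f(point) for f, w in zip(objectives, weights))

def dominates(a, b):
    """Pareto dominance (minimization): a dominates b if it is no
    worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

f1 = lambda x: x ** 2           # hypothetical objective 1
f2 = lambda x: (x - 2) ** 2     # hypothetical objective 2
print(cumulative_fitness(1.0, [f1, f2], [0.5, 0.5]))  # 1.0
print(dominates((1, 1), (2, 1)))  # True
```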
Abstract: With the rapid development in the field of life sciences and the flood of genomic information, the need for faster and more scalable searching methods has become urgent. One of the approaches investigated is indexing. Indexing methods have been categorized into three classes: length-based index algorithms, transformation-based algorithms and mixed-technique algorithms. In this research, we focused on the transformation-based methods. We embedded the N-gram method into the transformation-based method to build an inverted index table. We then applied parallel methods to speed up the index building time and to reduce the overall retrieval time when querying the genomic database. Our experiments show that the N-gram transformation algorithm is an economical solution; it saves both time and space. The results show that the size of the index is smaller than the size of the dataset when the N-gram size is 5 or 6. The results of the parallel N-gram transformation algorithm indicate that the use of parallel programming with large datasets is promising and can be improved further.
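The inverted index construction described above can be sketched as follows, assuming plain substring N-grams over DNA strings (the sequences shown are toy data, not the paper's dataset):

```python
from collections import defaultdict

def build_ngram_index(sequences, n=5):
    """Build an inverted index mapping each length-n substring
    (N-gram) of a sequence to the (sequence id, offset) pairs
    where it occurs."""
    index = defaultdict(list)
    for seq_id, seq in enumerate(sequences):
        for i in range(len(seq) - n + 1):
            index[seq[i:i + n]].append((seq_id, i))
    return index

seqs = ["ACGTACGT", "TTACGTTT"]
idx = build_ngram_index(seqs, n=5)
print(idx["TACGT"])  # every occurrence of the 5-gram TACGT
```

A query N-gram is then answered by a single dictionary lookup instead of a scan over the whole dataset, which is where the retrieval-time saving comes from.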
Abstract: The increasing importance of data streams arising in a wide range of advanced applications has led to the extensive study of mining frequent patterns. Mining data streams poses many new challenges, amongst which are the one-scan nature, the unbounded memory requirement and the high arrival rate of data streams. In this paper, we propose a new approach for mining itemsets on data streams. Our approach, SFIDS, has been developed based on the FIDS algorithm. The main aim was to keep some advantages of the previous approach while resolving some of its drawbacks, and consequently to improve run time and memory consumption. Our approach has the following advantages: it uses a lattice-like data structure to keep frequent itemsets; it separates regions from each other by deleting common nodes, which decreases the search space, memory consumption and run time; and finally, considering the CPU constraint, when an increasing data arrival rate overloads the system, SFIDS automatically detects this situation and discards some of the unprocessed data. We guarantee that the error of the results is bounded by a user pre-specified threshold, based on a probabilistic technique. Final results show that the SFIDS algorithm attains about 50% run-time improvement over the FIDS approach.
Abstract: The Short Message Service (SMS) has grown in popularity over the years and has become a common way of communication; it is a service provided through the Global System for Mobile Communications (GSM) that allows users to send text messages to others.
SMS is usually used to transport unclassified information, but with the rise of mobile commerce it has become a popular tool for transmitting sensitive information between a business and its clients. By default, SMS does not guarantee confidentiality and integrity of the message content.
In mobile communication systems, the security (encryption) offered by the network operator applies only to the wireless link; data delivered through the mobile core network may not be protected. Existing end-to-end security mechanisms are provided at the application level and are typically based on public-key cryptosystems.
The main concern in a public-key setting is the authenticity of the public key; this issue can be resolved by identity-based (ID-based) cryptography, where the public key of a user can be derived from public information that uniquely identifies the user.
This paper presents an encryption mechanism based on the ID-based scheme using elliptic curves to provide end-to-end security for SMS. This mechanism has been implemented over the standard SMS network architecture, and the encryption overhead has been estimated and compared with that of the RSA scheme. This study indicates that the ID-based mechanism has advantages over the RSA mechanism in key distribution and in scaling to increasing security levels for mobile services.