Abstract: Chemical Reaction Optimization (CRO) is an
optimization metaheuristic inspired by the nature of chemical
reactions as a natural process of transforming the substances from
unstable to stable states. Starting with some unstable molecules with
excessive energy, a sequence of interactions takes the set to a state of
minimum energy. Researchers have reported successful applications of the
algorithm to engineering problems such as the quadratic assignment
problem, with superior performance compared with
other optimization algorithms. We adapted this optimization
algorithm to the Printed Circuit Board Drilling Problem (PCBDP)
towards reducing the drilling time and hence improving the PCB
manufacturing throughput. Although the PCBDP can be viewed as an
instance of the popular Traveling Salesman Problem (TSP), it has
some characteristics that require special attention to the
reactions that explore the solution landscape. Experimental
results using the standard CROToolBox are not promising for
practically sized problems, although it could find optimal solutions for
artificial problems and small benchmarks as a proof of concept.
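As an illustration of the objective that CRO operates on here, the sketch below (ours, not the CROToolBox code) encodes a drilling route as a visiting order over hole coordinates, uses the total drill-head travel distance as the candidate solution's potential energy, and shows one simple swap perturbation of the kind an elementary reaction might apply.

```python
import math
import random

def tour_length(holes, order):
    """Potential energy of a candidate route: total drill-head travel
    distance for visiting the holes in the given order and returning to
    the starting hole (the closed-tour, TSP view of the PCBDP)."""
    return sum(
        math.dist(holes[order[i]], holes[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def swap_move(order):
    """A simple perturbation, illustrative of an elementary reaction:
    swap the positions of two randomly chosen holes in the route."""
    i, j = random.sample(range(len(order)), 2)
    neighbour = order[:]
    neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
    return neighbour

# Five illustrative drill holes given as (x, y) coordinates.
holes = [(0, 0), (4, 1), (2, 5), (7, 3), (5, 6)]
route = list(range(len(holes)))
print(tour_length(holes, route), tour_length(holes, swap_move(route)))
```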
Abstract: A Distributed Denial of Service (DDoS) attack is a
major threat to cyber security. It originates from the network layer or
the application layer of compromised/attacker systems which are
connected to the network. The impact of this attack ranges from a
simple inconvenience in using a particular service to major
failures at the targeted server. When there is heavy traffic flow to a
target server, it is necessary to classify the legitimate access and
attacks. In this paper, a novel method is proposed to detect DDoS
attacks from the traces of traffic flow. An access matrix is created
from the traces. As the access matrix is multidimensional, Principal
Component Analysis (PCA) is used to reduce the attributes used for
detection. Two classifiers, Naive Bayes and K-Nearest Neighbor,
are used to classify the traffic as normal or abnormal. The
performance of the classifiers with the PCA-selected attributes and with
the full attributes of the access matrix is compared in terms of detection
rate and False Positive Rate (FPR).
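A minimal sketch of this detection pipeline, assuming a scikit-learn environment; the access matrix and labels here are synthetic stand-ins for real traffic traces:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Hypothetical access matrix X (rows: traffic flows, columns: attributes)
# and labels y (0 = normal, 1 = attack); real traces would replace this.
rng = np.random.default_rng(0)
X = rng.random((1000, 20))          # 20 raw attributes per flow
y = rng.integers(0, 2, size=1000)   # synthetic normal/attack labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Reduce the multidimensional access matrix to a few principal components.
pca = PCA(n_components=5).fit(X_train)
X_train_pca, X_test_pca = pca.transform(X_train), pca.transform(X_test)

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    acc_raw = clf.fit(X_train, y_train).score(X_test, y_test)
    acc_pca = clf.fit(X_train_pca, y_train).score(X_test_pca, y_test)
    print(name, "full attributes:", acc_raw, "PCA attributes:", acc_pca)
```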
Abstract: In this paper, we propose a method for three-dimensional
(3-D) model indexing based on defining a new
descriptor using spherical harmonics.
The purpose of the method is to minimize the processing time on the
database of object models and the time required to search for objects
similar to a query object.
First, we define the new descriptor using a new
division of the 3-D object within a sphere. Then we define a new distance
that is used in the search for similar objects in the database.
Abstract: Wireless mesh networking is rapidly gaining in
popularity with a variety of users: from municipalities to enterprises,
from telecom service providers to public safety and military
organizations. This increasing popularity rests on two basic facts:
ease of deployment, since WMNs do not rely on any fixed
infrastructure, and an increase in network capacity expressed as
bandwidth per coverage area. Many efforts have been made to maximize the
throughput of multi-channel multi-radio wireless
mesh networks. Current approaches are based purely on either static or
dynamic channel allocation. In this paper, we use a
hybrid multi-channel multi-radio wireless mesh networking
architecture in which both static and dynamic interfaces are built into the
nodes. The Dynamic Adaptive Channel Allocation (DACA) protocol
considers optimization of both throughput and delay in the channel
allocation. Channel assignment is treated as co-dependent
with the routing problem in the wireless mesh network and
is based on the traffic flow on every link. Temporal and
spatial variations make it necessary to recompute the channel assignment
whenever the traffic pattern in the mesh network changes, at which point the
channel assignment algorithms reassign channels across the network.
We also propose computing a path metric that captures the available path
bandwidth, together with an efficient routing protocol based on this new path
information that exploits both static and dynamic links. The consistency
property guarantees that each node makes an appropriate packet
forwarding decision while balancing the control overhead of the network,
so that a data packet traverses the right path.
Abstract: Evaluating the performance of a simulator in the
CAVE has to be confirmed by encouraging people to experience
virtual reality for themselves. In this paper, a detailed procedure for
recording video is presented. The limitations of the experimental device
are first exposed, and then solutions for improving this idea are
described.
Abstract: In this paper, a new algorithm is designed to generate random
simple polygons from a given set of points in a two-dimensional
plane. The proposed algorithm uses a genetic algorithm to
generate polygons with few vertices. A new merge algorithm is
presented which converts any two polygons into a simple polygon.
This algorithm at first changes two polygons into a polygonal chain
and then the polygonal chain is converted into a simple polygon. The
process of converting a polygonal chain into a simple polygon is
based on the removal of intersecting edges. The experimental results
show that the proposed algorithm can generate a large
number of different simple polygons and performs better than
well-known algorithms such as space partitioning and
steady growth.
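Since the conversion step hinges on detecting and removing intersecting edges, the illustrative helpers below (ours, not the paper's code) show a standard orientation-based segment intersection test and how it can flag a self-intersecting polygonal chain.

```python
def ccw(a, b, c):
    """Signed orientation of the point triple (a, b, c)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, q1, q2):
    """True if the open segments p1-p2 and q1-q2 properly cross."""
    d1, d2 = ccw(q1, q2, p1), ccw(q1, q2, p2)
    d3, d4 = ccw(p1, p2, q1), ccw(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def has_self_intersection(chain):
    """Check whether any two non-adjacent edges of a polygonal chain cross."""
    edges = list(zip(chain, chain[1:]))
    return any(
        segments_intersect(*edges[i], *edges[j])
        for i in range(len(edges)) for j in range(i + 2, len(edges))
    )
```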
Abstract: The main aim of a communication system is to
achieve maximum performance. In Cognitive Radio, any user or
transceiver has the ability to sense the most suitable channel while that
channel is not in use. This means an unlicensed user can share the spectrum of a
licensed user without causing any interference. However, spectrum sensing
consumes a large amount of energy, which can be reduced by applying
various artificial intelligence methods to determine proper spectrum
holes. This also increases the efficiency of the Cognitive Radio Network
(CRN). In this survey paper we discuss the use of different learning
models and implementation of Artificial Neural Network (ANN) to
increase the learning and decision-making capacity of CRN without
affecting bandwidth, cost and signal rate.
Abstract: Spam is any unwanted electronic message or material,
in any form, posted to many people. As the world becomes increasingly
globalized, social networking sites play an important role by
providing people from different parts of the
world a platform to meet and express their views. Among the different
social networking sites, Facebook has become the leading one. With
this increase in usage, some users have begun to abuse Facebook by
posting spam or creating ways to post spam. This paper highlights the
potential types of spam that Facebook users face nowadays. It
also explains how users become victims of spam attacks. Finally, a
methodology is proposed for handling the different
types of spam.
Abstract: In this study, the data loss tolerance of a Support Vector Machine (SVM) based activity recognition model and its multi-activity classification performance when data are received over a lossy wireless sensor network are examined. Initially, the classification algorithm we use is evaluated in terms of resilience to random data loss with 3D acceleration sensor data for sitting, lying, walking and standing actions. The results show that the proposed classification method can recognize these activities successfully despite high data loss. Secondly, the effect of differentiated quality of service on activity recognition success is measured with activity data acquired from a multi-hop wireless sensor network, which introduces high data loss. The effect of the number of nodes on reliability and multi-activity classification success is demonstrated in a simulation environment. To the best of our knowledge, the effect of data loss in a wireless sensor network on the activity detection success rate of an SVM-based classification algorithm has not been studied before.
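A rough sketch of this kind of experiment, using scikit-learn and synthetic stand-in features instead of the paper's acceleration data: an SVM is trained on four activity labels and its accuracy is measured while an increasing fraction of the received samples is discarded to emulate a lossy network.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 3D-acceleration feature vectors and four activity
# labels (sit / lie / walk / stand); real sensor data would replace this.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 6))            # e.g. mean/variance per axis
y = rng.integers(0, 4, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
clf = SVC(kernel="rbf").fit(X_train, y_train)

for loss_rate in (0.0, 0.3, 0.6):
    # Emulate a lossy WSN by discarding a fraction of the test samples.
    keep = rng.random(len(X_test)) >= loss_rate
    print(f"loss {loss_rate:.0%}: accuracy "
          f"{clf.score(X_test[keep], y_test[keep]):.3f}")
```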
Abstract: Cloud computing (CC) has already gained overall
appreciation in research and practice. While the willingness to
integrate cloud services into various IT environments remains unbroken,
current CC procurement processes run mostly in an unorganized
and non-standardized way. In practice, a sufficiently specific yet
applicable business process for the important acquisition phase is
often lacking, and research has not yet appropriately remedied this
deficiency. Therefore, this paper introduces a field-tested
approach for CC procurement. Based on an extensive literature
review and augmented by expert interviews, we designed a model
that is validated and further refined through an in-depth real-life case
study. For the detailed process description, we apply the event-driven
process chain notation (EPC). The valuable insights gained from the
case study may help CC research shift toward a more socio-technical
perspective. For practice, in addition to useful organizational instructions,
we provide extended checklists and lessons learned.
Abstract: Locating Radio Controlled (RC) devices using their
unintended emissions is of great interest for security
reasons. The weak nature of these emissions requires a near-field
localization approach, since it is hard to detect these signals in the far-field
region of the array. In addition to angle estimation, near-field
localization also requires range estimation of the source, which makes
it more complicated than far-field models. The challenges of
locating such devices in the near-field region and in a real-time environment
are analyzed in this paper. An ESPRIT-like near-field localization
scheme is utilized for both angle and range estimation, and a 1-D search
with symmetric subarrays is provided. Two 7-element uniform linear
antenna arrays (ULAs) are employed to locate the RC source.
Experimental location estimation results for one unintentionally emitting
walkie-talkie at different positions are given.
Abstract: In-memory database systems are becoming popular
due to the availability and affordability of sufficiently large RAM and
processors in modern high-end servers with the capacity to manage
large in-memory database transactions. While fast and reliable in-memory
systems are still being developed to overcome cache misses,
CPU/IO bottlenecks and distributed transaction costs, disk-based data
stores still serve as the primary persistence. In addition, with the
recent growth in multi-tenancy cloud applications and associated
security concerns, many organisations consider the trade-offs and
continue to require fast and reliable transaction processing of disk-based
database systems as an available choice. For these
organizations, the only way of increasing throughput is by improving
the performance of disk-based concurrency control. This warrants a
hybrid database system with the ability to selectively apply an
enhanced disk-based data management within the context of in-memory
systems that would help improve overall throughput.
The general view is that in-memory systems substantially
outperform disk-based systems. We question this assumption and
examine how a modified variation of access invariance that we call
enhanced memory access (EMA) can be used to allow very high
levels of concurrency in the pre-fetching of data in disk-based
systems. We demonstrate how this prefetching in disk-based systems
can yield close to in-memory performance, which paves the way for
improved hybrid database systems. This paper proposes a novel EMA
technique and presents a comparative study between disk-based EMA
systems and in-memory systems running on hardware configurations
of equivalent power in terms of the number of processors and their
speeds. The results of the experiments conducted clearly substantiate
that when used in conjunction with all concurrency control
mechanisms, EMA can increase the throughput of disk-based systems
to levels quite close to those achieved by in-memory systems. The
promising results of this work show that enhanced disk-based
systems facilitate improved hybrid data management within the
broader context of in-memory systems.
Abstract: In this study, we propose a novel technique for acoustic
echo suppression (AES) during speech recognition under barge-in
conditions. Conventional AES methods based on spectral subtraction
apply fixed weights to the estimated echo path transfer function
(EPTF) at the current signal segment and to the EPTF estimated until
the previous time interval. However, the effects of echo path changes
should be considered for eliminating the undesired echoes. We
describe a new approach that adaptively updates weight parameters in
response to abrupt changes in the acoustic environment due to
background noise or double-talk. Furthermore, we devised a voice
activity detector and an initial time-delay estimator for barge-in speech
recognition in communication networks. The initial time delay is
estimated using the log-spectral distance measure, as well as
cross-correlation coefficients. The experimental results show that the
developed techniques can be successfully applied in barge-in speech
recognition systems.
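One way to picture the adaptive weighting described above (illustrative notation, not taken verbatim from the paper) is an EPTF estimate that blends the current segment's estimate with the previous interval's estimate using a weight that is adapted when the acoustic environment changes:

```latex
\hat{H}(k,l) = w_l\, H_{\mathrm{cur}}(k,l) + \bigl(1 - w_l\bigr)\, \hat{H}(k,l-1),
\qquad 0 \le w_l \le 1,
```

where the weight w_l would be increased after an abrupt echo-path change (e.g. background noise or double-talk) and decreased in stationary segments.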
Abstract: In this paper we describe the Levenberg-Marquardt
(LM) algorithm for the identification and equalization of CDMA
signals received by an antenna array in communication channels.
The synthesis explains the digital separation and equalization of
signals after propagation through multipath channels that generate intersymbol
interference (ISI). Exploiting the discrete transmitted data and three
diversities induced at reception, the problem can be formulated as
the Block Component Decomposition (BCD) of a tensor of
order 3, a new tensor decomposition that generalizes the
PARAFAC decomposition. Optimizing the BCD with the
Levenberg-Marquardt method gives encouraging results compared with the
classical alternating least squares (ALS) algorithm. In the equalization
part, we use the Minimum Mean Square Error (MMSE) criterion to carry out
the presented method. The simulation results obtained using the LM algorithm
are significant.
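For reference, the standard Levenberg-Marquardt step used to minimize a nonlinear least-squares cost (here, the BCD fitting error) solves a damped normal equation; this is the textbook form, not notation taken from the paper:

```latex
\bigl(J^{\top} J + \lambda I\bigr)\,\Delta\theta = -\,J^{\top} r(\theta),
\qquad \theta \leftarrow \theta + \Delta\theta,
```

where r(theta) is the residual vector, J its Jacobian, and the damping factor lambda interpolates between a Gauss-Newton step (small lambda) and a gradient-descent step (large lambda).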
Abstract: A simulation-based VLSI implementation of the
FELICS (Fast Efficient Lossless Image Compression System)
algorithm is proposed to provide lossless image compression,
implemented in a simulation-oriented Very Large Scale
Integration (VLSI) design. The aim is to analyze the performance of lossless image
compression, reducing image size without losing image quality,
and to implement the FELICS algorithm in VLSI. The FELICS
algorithm uses a simplified adjusted binary code for
image compression; the compressed image is processed
pixel by pixel and then implemented in the VLSI domain. This approach is used
to achieve high processing speed and to minimize area and power.
The simplified adjusted binary code reduces the number of arithmetic
operations and achieves high processing speed. Color-difference
preprocessing is also proposed to improve coding efficiency with
simple arithmetic operations. The VLSI-based FELICS
algorithm provides an effective hardware architecture with
regular, pipelined data-flow parallelism in four stages.
With two-level parallelism, consecutive pixels are classified into
even and odd samples, and an individual hardware engine is
dedicated to each. This method can be further enhanced by
multilevel parallelism.
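To give a concrete feel for this kind of code, the sketch below implements plain truncated binary coding for a value in the range [0, n-1]; the adjusted binary code used by FELICS follows the same idea but additionally places the shorter codewords on the central, most probable values of the prediction interval. This is our illustration, not the paper's hardware description.

```python
def truncated_binary(value, n):
    """Truncated ('adjusted') binary code for value in range(n): the first
    u = 2**(k+1) - n symbols get k bits, the remaining ones get k + 1 bits."""
    k = n.bit_length() - 1          # floor(log2(n)) for n >= 1
    u = (1 << (k + 1)) - n          # number of short (k-bit) codewords
    if value < u:
        return format(value, f"0{k}b") if k else ""
    return format(value + u, f"0{k + 1}b")

# Coding the offset P - L of an in-range pixel when H - L = 5 (n = 6 symbols).
print([truncated_binary(v, 6) for v in range(6)])
# -> ['00', '01', '100', '101', '110', '111']
```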
Abstract: In a previous study, a technique to estimate self-location by using a lunar image was proposed. In this paper, we consider improving the conventional method with FPGA implementation in mind. Specifically, we introduce the Artificial Bee Colony algorithm to reduce the search time. In addition, we use fixed-point arithmetic to enable high-speed operation on the FPGA.
Abstract: Operations research (OR) has been successful
in developing and applying scientific methods for problem
solving and decision-making. By using OR techniques, we
can enhance the use of computer decision support systems to achieve
optimal management for institutions. OR applies comprehensive
analysis, including all relevant factors, and builds mathematical
models to solve business or organizational problems. In addition, it
improves decision-making and uses available resources efficiently.
The adoption of OR by universities would definitely contribute to
the development and enhancement of the performance of OR
techniques. This paper provides an understanding of the structures,
approaches and models of OR in problem solving and decision-making.
Abstract: Artificial Neural Network (ANN) can be trained using
back propagation (BP). It is the most widely used algorithm for
supervised learning with multi-layered feed-forward networks.
Efficient learning by the BP algorithm is required for many practical
applications. The BP algorithm calculates the weight changes of
artificial neural networks, and a common approach is to use a two-term
algorithm consisting of a learning rate (LR) and a momentum
factor (MF). The major drawbacks of the two-term BP learning
algorithm are the problems of local minima and slow convergence
speeds, which limit the scope for real-time applications. Recently the
addition of an extra term, called a proportional factor (PF), to the
two-term BP algorithm was proposed. The third term increases the speed
of the BP algorithm. However, the PF term also reduces the
convergence of the BP algorithm, and criteria for evaluating
convergence are required to facilitate the application of the
three-term BP algorithm. Although these two issues appear to be closely related,
as described later, we summarize various improvements to overcome
the drawbacks. Here we compare the different methods of
convergence of the new three-term BP algorithm.
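For concreteness, a common way of writing the three-term update (our notation; the exact formulation may differ in the cited work) is:

```latex
\Delta w(t) = -\,\alpha \,\frac{\partial E}{\partial w}
\;+\; \beta \,\Delta w(t-1)
\;+\; \gamma\, e(t),
```

where alpha is the learning rate (LR), beta the momentum factor (MF), gamma the proportional factor (PF), and e(t) the output error at iteration t.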
Abstract: Many wireless sensor network applications require
K-coverage of the monitored area. In this paper, we propose a
harmony search based algorithm, the K-Coverage Enhancement Algorithm (KCEA),
that is scalable in terms of execution time and attempts to
enhance the initial coverage and efficiently achieve the required K-coverage degree
for a specific application. Simulation results show that
the proposed algorithm achieves a coverage improvement of 5.34%,
compared to 1.31% for K-Coverage Rate Deployment (K-CRD),
when deploying one additional sensor. Moreover, the proposed
algorithm is more time efficient.
Abstract: Nowadays, social media information, such as news,
links, images, or videos, is shared extensively. However,
information disseminated through social media often
lacks quality: less fact checking, more bias, and many rumors.
Many researchers have investigated credibility on Twitter, but
there are no research reports about information credibility on
Facebook. This paper proposes features for measuring the credibility of
Facebook information, and we developed a system for assessing credibility on
Facebook. First, we developed an FB credibility evaluator for
measuring the credibility of each post based on manual human labelling. We
then collected the training data to create a model using a Support
Vector Machine (SVM). Second, we developed a Chrome extension
of FB credibility that allows Facebook users to evaluate the credibility of
each post. Based on the usage analysis of our FB credibility Chrome
extension, about 81% of users' responses agree with the
credibility automatically suggested by the proposed system.