Abstract: In this paper, a new SMC (Sliding Mode Control)
method with MPC (Model Predictive Control) integral action for the
slip suppression of an EV (Electric Vehicle) under braking is proposed.
The proposed method introduces an integral term with the standard SMC
gain, where the integral gain is optimized for each control period by
the MPC algorithm. The aim of this method is to improve the safety
and stability of EVs under braking by controlling the wheel slip
ratio. Numerical simulation results are also included to demonstrate
the effectiveness of the method.
Abstract: Localization of nodes is one of the key issues in
Wireless Sensor Networks (WSNs) that has gained wide attention in
recent years. The existing localization techniques can be generally
categorized into two types: range-based and range-free. Compared
with range-based schemes, range-free schemes are more cost-effective,
because no additional ranging devices are needed. As a
result, we focus our research on range-free schemes. In this paper
we study three types of range-free localization algorithms to compare the
localization error and energy consumption of each one. The Centroid
algorithm requires that a normal node have at least three neighbor anchors,
while the DV-Hop algorithm does not have this requirement. The third
studied algorithm is the Amorphous algorithm, which is similar to the
DV-Hop algorithm; the idea is to calculate the hop distance between two
nodes instead of the linear distance between them. The simulation
results show that the localization accuracy of the Amorphous
algorithm is higher than that of the other algorithms, while its energy
consumption does not increase significantly.
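The Centroid scheme mentioned above can be sketched in a few lines: an unknown node estimates its position as the centroid of the anchor positions it can hear. This is a minimal illustration under assumed conventions (a circular communication range, 2-D coordinates, and the function name), not code from the paper.

```python
# Minimal sketch of range-free Centroid localization: a node takes the
# centroid of all neighbor anchors. In a real deployment, neighbors are
# determined by radio reception; this simulation-style sketch uses an
# assumed circular communication range instead.

def centroid_localize(node, anchors, comm_range):
    """Estimate a node's (x, y) position from its neighbor anchors.

    anchors: list of (x, y) anchor positions; the scheme requires at
    least three anchors within communication range.
    """
    neighbors = [(ax, ay) for ax, ay in anchors
                 if (ax - node[0]) ** 2 + (ay - node[1]) ** 2 <= comm_range ** 2]
    if len(neighbors) < 3:
        return None  # Centroid scheme cannot localize this node
    x = sum(ax for ax, _ in neighbors) / len(neighbors)
    y = sum(ay for _, ay in neighbors) / len(neighbors)
    return (x, y)
```

The `None` return reflects the requirement noted above: a normal node needs at least three neighbor anchors, which DV-Hop and Amorphous avoid by propagating hop counts instead.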
Abstract: This paper is concerned with the single-item
continuous review inventory system in which demand is stochastic
and discrete. The budget consumed for purchasing the ordered items
is not restricted, but an extra cost is incurred when it exceeds a
specific value. The unit purchasing price depends on the quantity ordered
under the all-units discounts cost structure. In many actual systems,
the budget as a resource which is occupied by the purchased items is
limited, and the system can confront a resource shortage by
incurring additional costs. Thus, considering the resource shortage costs as
a part of system costs, especially when the amount of resource
occupied by the purchased item is influenced by quantity discounts,
is well motivated by practical concerns. In this paper, an optimization
problem is formulated for finding the optimal (r, Q) policy, when the
system is influenced by the budget limitation and a discount pricing
simultaneously. Properties of the cost function are investigated and
then an algorithm based on a one-dimensional search procedure is
proposed for finding an optimal (r, Q) policy which minimizes the
expected system costs.
Abstract: Wireless Sensor Networks (WSNs) enable new
applications and, because of energy and bandwidth constraints, need
non-conventional protocol paradigms. In a WSN, a sensor node's
lifetime is a critical parameter. Research on lifetime extension is based on
the Low-Energy Adaptive Clustering Hierarchy (LEACH) scheme,
which rotates the Cluster Head (CH) role among sensor nodes to distribute
energy consumption over all network nodes. CH selection in a WSN
greatly affects network energy efficiency. This study proposes an
improved CH selection for efficient data aggregation in sensor
networks. This new algorithm is based on Bacterial Foraging
Optimization (BFO) incorporated in LEACH.
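The standard LEACH election rule that the BFO-based improvement builds on can be sketched as follows. This shows only the conventional threshold formula T(n); the BFO step itself is not reproduced, and the parameter names and bookkeeping are assumed LEACH conventions rather than taken from the paper.

```python
import random

# Sketch of the standard LEACH cluster-head election: in round r, an
# eligible node n becomes CH if a uniform draw falls below
# T(n) = p / (1 - p * (r mod 1/p)), where p is the desired CH fraction.
# Nodes that already served as CH in the current epoch are ineligible.

def leach_threshold(p, r, eligible):
    """T(n): self-election threshold for a node in round r."""
    if not eligible:  # node was already CH within the last 1/p rounds
        return 0.0
    return p / (1 - p * (r % round(1 / p)))

def elect_cluster_heads(nodes, p, r, rng=random.random):
    """nodes: {node_id: eligible_flag}. Returns the list of elected CHs."""
    heads = []
    for node, eligible in nodes.items():
        if rng() < leach_threshold(p, r, eligible):
            heads.append(node)
    return heads
```

The threshold rises toward the end of each epoch of 1/p rounds, so every still-eligible node is eventually elected, which is what distributes energy consumption across the network.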
Abstract: Feature selection has been used in many fields such as
classification, data mining, and object recognition, and has proven
effective for removing irrelevant and redundant features from an
original dataset. In this paper, a new design of a distributed intrusion
detection system is presented, using a combined feature selection model
based on the Bees Algorithm and a decision tree. The Bees Algorithm is
used as the search strategy to find the optimal subset of features,
whereas the decision tree is used to judge the selected features. Both
the produced features and the generated rules are used by a Decision
Making Mobile Agent to decide whether there is an attack in the network.
The Decision Making Mobile Agent migrates through the network,
moving from one node to another; if it finds an attack on one
of the nodes, it alerts the user through the User Interface Agent or
takes action through the Action Mobile Agent. The KDD Cup 99
dataset is used to test the effectiveness of the proposed system. The
results show that even if only four features are used, the proposed
system gives better performance than the results obtained using all 41
features.
Abstract: File sharing in networks is generally achieved using
Peer-to-Peer (P2P) applications. Structured P2P approaches are
widely used in ad-hoc networks due to their distributed nature and
scalability. Efficient mechanisms are required to handle the huge
amount of data distributed to all peers. The intrinsic characteristics of
P2P systems make content distribution easier when compared to
client-server architectures. All the nodes in a P2P network act as both
client and server; thus, distributing data takes less time than in
the client-server method. The Chord protocol is a resource-routing
protocol in which nodes and data items are structured into a
one-dimensional ring. The structured lookup algorithm of Chord is
advantageous for distributed P2P networking applications. However,
while the structured approach improves lookup performance in a
high-bandwidth wired network, it can contribute to unnecessary overhead
in overlay networks, leading to degradation of network performance.
In this paper, the performance of the existing Chord protocol on a
Wireless Mesh Network (WMN) when nodes are static and dynamic
is investigated.
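Chord's core idea, mapping keys and node IDs onto one identifier ring where a key belongs to its successor, can be illustrated with a naive search. This is a hedged sketch: it is the O(N) linear scan, not Chord's O(log N) finger-table lookup, and the function names and ring size are assumptions for illustration.

```python
# Illustrative sketch of Chord's placement rule: on a 2**m identifier
# ring, a key is stored at its successor, i.e. the first node ID equal
# to or clockwise after the key, wrapping around at the top of the ring.

def successor(node_ids, key, m=6):
    """Return the node responsible for `key` on a 2**m identifier ring."""
    ring = sorted(n % (2 ** m) for n in node_ids)
    k = key % (2 ** m)
    for n in ring:
        if n >= k:
            return n
    return ring[0]  # no node at or after the key: wrap around the ring
```

Real Chord nodes keep a finger table of m shortcuts so a lookup halves the remaining ring distance at each hop; the WMN overhead discussed above comes from maintaining that overlay state as the underlying wireless topology changes.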
Abstract: Workflow scheduling is an important part of cloud
computing; based on different criteria, it determines cost, execution
time, and performance. A cloud workflow system is a platform
service facilitating the automation of distributed applications based on
the new cloud infrastructure. An aspect which differentiates a cloud
workflow system from others is its market-oriented business model, an
innovation which challenges conventional workflow scheduling
strategies. The Time and Cost optimization algorithm for scheduling
Hybrid Clouds (TCHC), which decides which resources should be
chartered from public providers, is combined with a new De-De
algorithm so that every instance of single and multiple
workflows runs without deadlocks. To this end, two new concepts,
the De-De Dodging Algorithm and the Priority Based Decisive Algorithm,
are combined with conventional deadlock avoidance techniques in
one algorithm that maximizes active (not just allocated) resource use
and reduces makespan.
Abstract: This paper presents local mesh co-occurrence
patterns (LMCoP) using the HSV color space for an image retrieval system.
The HSV color space is used in this method to utilize the color, intensity,
and brightness of images. Local mesh patterns are applied to define the
local information of the image, and gray-level co-occurrence is used to
obtain the co-occurrence of local mesh pattern (LMeP) pixels. The local
mesh co-occurrence pattern extracts the local directional information from
the local mesh pattern and converts it into a well-structured feature
vector using the gray-level co-occurrence matrix. The proposed method is
tested on three different databases, namely MIT VisTex, Corel, and STex.
The algorithm is also compared with existing methods, and the results in
terms of precision and recall are shown in this paper.
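The gray-level co-occurrence step used above counts how often pairs of pixel values occur at a fixed spatial offset. The sketch below shows only that standard matrix computation; the local mesh pattern extraction that precedes it in LMCoP is not reproduced, and the offset and level count are illustrative assumptions.

```python
# Sketch of a gray-level co-occurrence matrix (GLCM): M[i][j] counts how
# many times value i occurs with value j at offset (dx, dy). In LMCoP
# the input would be the local-mesh-pattern image rather than raw pixels.

def glcm(image, levels, dx=1, dy=0):
    """image: 2-D list of ints in [0, levels). Returns the levels x levels counts."""
    rows, cols = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(rows):
        for x in range(cols):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < rows and 0 <= x2 < cols:
                m[image[y][x]][image[y2][x2]] += 1
    return m
```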
Abstract: In this paper, we provide a literature survey on the
artificial stock market (ASM). The paper begins by exploring the
complexity of the stock market and the need for ASMs. An ASM
aims to investigate the link between individual behaviors (micro
level) and financial market dynamics (macro level). The variety of
patterns at the macro level is a function of the ASM's complexity. The
financial market system is a complex system in which the relationship
between the micro and macro levels cannot be captured analytically.
Computational approaches, such as simulation, are expected to
capture this connection. Agent-based simulation is a simulation
technique commonly used to build ASMs. The paper proceeds by
discussing the components of the ASM. We consider the role
of behavioral finance (BF) alongside the traditional risk-averse
assumption in the construction of agents' attributes. Also, the
influence of social networks on the development of agent interactions is
addressed. Network topologies such as small-world, distance-based,
and scale-free networks may be utilized to outline economic
collaborations. In addition, the primary methods for developing
agents' learning and adaptive abilities are summarized.
These include approaches such as Genetic Algorithms, Genetic
Programming, Artificial Neural Networks, and Reinforcement Learning.
In addition, the most common statistical properties (the stylized facts)
of stocks that are used for the calibration and validation of ASMs are
discussed. We have also reviewed the major related previous
studies and categorized the approaches utilized in these
studies. Finally, research directions and potential research questions
are discussed. The research directions of ASMs may focus on the macro
level by analyzing the market dynamics or on the micro level by
investigating the wealth distributions of the agents.
Abstract: Voting algorithms are extensively used to make
decisions in fault tolerant systems where each redundant module
gives inconsistent outputs. Popular voting algorithms include
majority voting, weighted voting, and inexact majority voters. Each
of these techniques suffers from scenarios where agreements do not
exist for the given voter inputs. This has been successfully overcome
in the literature using fuzzy theory. Our previous work concentrated on a
neuro-fuzzy algorithm in which training using the neuro system
substantially improved the prediction result of the voting system.
However, the weight training of the neural network is sub-optimal. This
study proposes to optimize the weights of the neural network using the
Artificial Bee Colony algorithm. Experimental results show that the
proposed system improves the decision making of the voting
algorithms.
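The weighted-voting baseline named above can be sketched as follows: each redundant module's output is weighted, the highest-scoring output wins, and `None` signals the no-agreement case the abstract describes. The neuro-fuzzy training and ABC-optimized weights of the paper are not reproduced; the agreement threshold is an illustrative assumption.

```python
from collections import defaultdict

# Sketch of weighted voting over redundant module outputs: sum the weight
# behind each distinct output and accept the winner only if its share of
# the total weight exceeds an agreement threshold; otherwise report that
# no agreement exists (the failure scenario the fuzzy variants address).

def weighted_vote(outputs, weights, threshold=0.5):
    """Return the winning output if its weight share exceeds `threshold`, else None."""
    scores = defaultdict(float)
    for out, w in zip(outputs, weights):
        scores[out] += w
    winner, score = max(scores.items(), key=lambda kv: kv[1])
    return winner if score / sum(weights) > threshold else None
```

With equal weights this reduces to majority voting; the optimization step in the proposed system amounts to learning better values for `weights`.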
Abstract: Based on application requirements, nodes are static or
mobile in Wireless Sensor Networks (WSNs). Mobility poses
challenges in protocol design, especially at the link layer, requiring
mobility adaptation algorithms to localize mobile nodes and predict
the quality of the links to be established with them. This study implements
the XMAC and Berkeley Media Access Control (BMAC)
protocols to evaluate their performance under static and mobile
conditions in WSNs. This paper gives a comparative study of
mobility-aware MAC protocols. Protocol performance is evaluated based on
Average End-to-End Delay, Average Packet Delivery Ratio, Average Number
of Hops, and Jitter.
Abstract: Most of the existing video streaming protocols
provide video services without considering security aspects in
decentralized mobile ad-hoc networks. The security policies adapted
to the currently existing non-streaming protocols do not comply with
live video streaming protocols, resulting in considerable
vulnerability, high bandwidth consumption, and unreliability, which
cause severe security threats, low bandwidth, and error-prone
transmission, respectively, in video streaming applications. Therefore,
a synergized methodology is required to reduce vulnerability and
bandwidth consumption, and to enhance reliability in video
streaming applications in MANETs. To ensure security
with reduced bandwidth consumption and improved reliability of
video streaming applications, a Secure Low-bandwidth Video
Streaming through Reliable Multipath Propagation (SLVRMP)
protocol architecture has been proposed by incorporating two
algorithms, namely the Secure Low-bandwidth Video Streaming
Algorithm and the Reliable Secure Multipath Propagation Algorithm,
using Layered Video Coding in a non-overlapping zone routing
network topology. The performance of the proposed system is
compared to that of the other existing secure multipath protocols,
Sec-MR and SPREAD, using NS-2.34, and the simulation results show
that the performance of the proposed system is considerably
improved.
Abstract: The enormous amount of information stored on the
web grows from one day to the next, confronting the web with
the inevitable difficulty of retrieving the pertinent information
that users really want. The problem today is not limited to expanding
the size of the information highways, but extends to designing a system
for intelligent search. The vast majority of this information is stored in
relational databases, which in turn represent a backend for managing
the RDF data of the semantic web. This problem has motivated us to
write this paper in order to establish an effective approach supporting
a semantic transformation algorithm from SPARQL queries to SQL
queries, more precisely SPARQL SELECT queries; by adopting this
method, a relational database can be queried easily with
SPARQL while maintaining the same performance.
Abstract: A robust sequential nonparametric method is proposed
for real-time adaptation to background noise parameters. The
distribution of the background noise is modelled as a Huber
contamination mixture. The method is designed to operate as an
adaptation unit included inside the detection subsystem of an
integrated multichannel monitoring system. The proposed method
guarantees a given size of the nonasymptotic confidence set for the
noise parameters. Properties of the suggested method are rigorously
proved. The proposed algorithm has been successfully tested in the real
conditions of a functioning C-OTDR monitoring system, which was
designed to monitor railways.
Abstract: Live video streaming is one of the most widely used
services among end users, yet it is a big challenge for network
operators in terms of quality. The only way to provide an excellent
Quality of Experience (QoE) to end users is continuous
monitoring of live video streaming. For this purpose, there are several
objective algorithms available that monitor the quality of the video in
a live stream. Subjective tests play a very important role in fine-tuning
the results of objective algorithms. As human perception is
considered the most reliable source for assessing the quality of a
video stream, subjective tests are conducted in order to develop more
reliable objective algorithms. Temporal impairments in a live video
stream can have a negative impact on end users. In this paper we
have conducted subjective evaluation tests on a set of video
sequences containing a temporal impairment known as frame freezing.
Frame freezing is considered a transmission error as well as a
hardware error, which can result in the loss of video frames on the
receiving side of a transmission system. In our subjective tests, we
have evaluated videos that contain a single freezing event
and videos that contain multiple freezing events. We have
recorded our subjective test results for all the videos in order to give a
comparison of the available No-Reference (NR) objective
algorithms. Finally, we have shown the performance of the no-reference
algorithms used for the objective evaluation of videos and suggested the
algorithm that works best. The outcome of this study shows the
importance of QoE and its effect on human perception. The results
of the subjective evaluation can serve the purpose of validating
objective algorithms.
Abstract: Human beings have the ability to make logical
decisions. Although human decision-making is often optimal, it is
insufficient when a huge amount of data is to be classified. A medical
dataset is a vital ingredient in predicting a patient's health
condition. To obtain the best prediction, the most
suitable machine learning algorithms are required. This work compared
the performance of Artificial Neural Network (ANN) and Decision Tree
Algorithm (DTA) models with respect to several performance metrics using
diabetes data. The WEKA software was used for the implementation of
the algorithms. Multilayer Perceptron (MLP) and Radial Basis
Function (RBF) were the two algorithms used for the ANN, while
the REPTree and LADTree algorithms were the DTA models used. From
the results obtained, the DTA models performed better than the ANN
models. The Root Mean Squared Error (RMSE) of MLP is 0.3913, that of
RBF is 0.3625, that of REPTree is 0.3174, and that of LADTree is 0.3206.
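The RMSE values compared above follow the standard definition, which can be computed as below. This is the textbook formula, not code from the paper.

```python
import math

# Root Mean Squared Error: the square root of the mean squared difference
# between predicted and actual values; lower is better, as in the
# REPTree (0.3174) vs. MLP (0.3913) comparison above.

def rmse(predicted, actual):
    """RMSE between two equal-length numeric sequences."""
    n = len(actual)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)
```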
Abstract: DNA barcodes provide a good source of the
information needed to classify living species. The classification problem
has to be supported with reliable methods and algorithms. To analyze
species regions or entire genomes, it becomes necessary to use
sequence similarity methods. A large set of sequences can be
simultaneously compared using Multiple Sequence Alignment, which
is known to be NP-complete. However, all the methods in use are still
computationally very expensive and require significant computational
infrastructure. Our goal is to build predictive models that are highly
accurate and interpretable. In fact, our method makes it possible to avoid
the complex problem of form and structure in different classes of
organisms. The empirical data and their classification performances
are compared with other methods. In this study, we present
our system, which consists of three phases. The first, called
transformation, is composed of three sub-steps: Electron-Ion
Interaction Pseudopotential (EIIP) for the codification of DNA
barcodes, Fourier transform, and power spectrum signal processing.
The second phase is an approximation; it is
empowered by the use of Multi-Library Wavelet Neural Networks
(MLWNN). Finally, the third phase, the classification of DNA
barcodes, is realized by applying a hierarchical
classification algorithm.
Abstract: The lifetime of a wireless sensor network can be
effectively increased by using scheduling operations. Once the
sensors are randomly deployed, the task at hand is to find the largest
number of disjoint sets of sensors such that every sensor set provides
complete coverage of the target area. At any instant, only one of these
disjoint sets is switched on, while all the others are switched off. This
paper proposes a heuristic search method to find the maximum
number of disjoint sets that completely cover the region. A
population of randomly initialized members is made to explore the
solution space. A set of heuristics has been applied to guide the
members to a possible solution in their neighborhood. The heuristics
escalate the convergence of the algorithm. The best solution explored
by the population is recorded and is continuously updated. The
proposed algorithm has been tested for applications which require
sensing of multiple target points, referred to as point coverage
applications. Results show that the proposed algorithm outperforms the
existing algorithms. It always finds the optimum solution, and does
so with fewer fitness function evaluations than the
existing approaches.
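The disjoint-set-cover objective above can be illustrated with a plain greedy baseline: repeatedly build one full cover of the targets, remove its sensors from the pool, and stop when no complete cover remains. This is a hedged sketch of the problem, not the paper's heuristic population search, and all names are illustrative.

```python
# Greedy baseline for the disjoint set cover problem: each constructed
# cover monitors every target; covers share no sensors; only one cover
# is active at a time, so more covers means a longer network lifetime.

def greedy_disjoint_covers(coverage, targets):
    """coverage: {sensor: set of targets it senses}. Returns a list of disjoint covers."""
    remaining = dict(coverage)
    covers = []
    while True:
        uncovered, cover = set(targets), []
        pool = dict(remaining)
        while uncovered:
            # pick the sensor covering the most still-uncovered targets
            best = max(pool, key=lambda s: len(pool[s] & uncovered), default=None)
            if best is None or not (pool[best] & uncovered):
                return covers  # no sensor helps: no further full cover exists
            cover.append(best)
            uncovered -= pool.pop(best)
        for s in cover:
            del remaining[s]
        covers.append(cover)
```

Greedy construction gives no optimality guarantee, which is precisely the gap the heuristic population search above targets: it explores many candidate partitions instead of committing to one greedy choice per step.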
Abstract: In this paper, the specific sound Transmission Loss
(TL) of a Laminated Composite Plate (LCP) with different material
properties in each layer is investigated. A numerical method to
obtain the TL of the LCP is proposed using elastic plate theory. The
transfer matrix approach is newly presented for computational
efficiency in solving the dynamic stiffness matrices
(D-matrices) of the numerous layers of the LCP. Besides the numerical
simulations for calculating the TL of the LCP, a material-properties
inverse method is presented for the design of a laminated composite plate
analogous to a metallic plate with a specified TL. The results
demonstrate that the proposed computational algorithm exhibits high
efficiency with a small number of iterations for achieving the goal. This
method can be effectively employed to design and develop tailor-made
materials for various applications.
Abstract: The steepest descent method is a simple gradient method
for optimization. This method converges slowly toward
the optimal solution because of the zigzag form of its
steps. Barzilai and Borwein modified this algorithm so that it
performs well for problems with large dimensions. The Barzilai and
Borwein method has sparked much research on the method
of steepest descent, including the alternate minimization gradient method
and Yuan's method. Inspired by previous works, we modified the step
size of the steepest descent method. We then compare the
modified method against the Barzilai and Borwein method, the
alternate minimization gradient method, and Yuan's method on
quadratic function cases in terms of the number of iterations and the
running time. The average results indicate that the steepest descent
method with the new step sizes provides good results for small
dimensions and is able to compete with the Barzilai and
Borwein method and the alternate minimization gradient method for
large dimensions. The new step sizes exhibit faster convergence than
the other methods, especially for cases with large
dimensions.
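Gradient descent with the Barzilai and Borwein step size discussed above can be sketched on a quadratic f(x) = 0.5 x'Ax - b'x, whose gradient is Ax - b. The paper's own modified step sizes are not reproduced; this uses the standard BB1 formula alpha_k = (s's)/(s'y) with s = x_k - x_{k-1} and y = g_k - g_{k-1}, and the initial step size is an illustrative assumption.

```python
# Sketch of steepest descent with Barzilai-Borwein (BB1) step sizes on a
# quadratic. Instead of an exact line search (which produces the zigzag
# behavior noted above), the step size is built from the previous iterate
# and gradient differences.

def bb_minimize_quadratic(A, b, x0, alpha0=1e-3, iters=100):
    """Minimize 0.5*x'Ax - b'x (A as a list of rows) with BB1 steps."""
    def grad(x):
        return [sum(ai * xi for ai, xi in zip(row, x)) - bi
                for row, bi in zip(A, b)]
    x, g = list(x0), grad(x0)
    x_new = [xi - alpha0 * gi for xi, gi in zip(x, g)]  # first step: fixed size
    for _ in range(iters):
        g_new = grad(x_new)
        s = [a - c for a, c in zip(x_new, x)]   # iterate difference
        y = [a - c for a, c in zip(g_new, g)]   # gradient difference
        sy = sum(si * yi for si, yi in zip(s, y))
        if sy == 0:
            break  # gradient unchanged (converged): stop
        alpha = sum(si * si for si in s) / sy   # BB1 step size
        x, g = x_new, g_new
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x_new
```

For a symmetric positive definite A the BB step is a Rayleigh-quotient-like quantity lying between the extreme eigenvalue reciprocals, which is what breaks the zigzag pattern of exact-line-search steepest descent.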