Abstract: We present a solution to the Maxmin u/E parameter
estimation problem for possibility distributions in the
m-dimensional case. Our method is based on a geometrical approach
in which a minimal-area enclosing ellipsoid is constructed around
the sample. We also demonstrate that the results of well-known
algorithms for the fuzzy model identification task can be improved
using Maxmin u/E parameter estimation.
Abstract: Wireless mesh networking is rapidly gaining in
popularity with a variety of users: from municipalities to enterprises,
from telecom service providers to public safety and military
organizations. This increasing popularity rests on two basic facts:
ease of deployment, since WMNs do not rely on any fixed
infrastructure, and an increase in network capacity expressed in
bandwidth per footage. Many efforts have been made to maximize the
throughput of multi-channel, multi-radio wireless mesh networks.
Current approaches are based purely on either static or dynamic
channel allocation. In this paper, we use a hybrid multi-channel,
multi-radio wireless mesh networking architecture in which both
static and dynamic interfaces are built into the nodes. The Dynamic
Adaptive Channel Allocation (DACA) protocol optimizes both
throughput and delay in the channel allocation. Channel assignment
is treated as codependent with the routing problem in the wireless
mesh network and is based on the traffic flow on every link. Because
traffic varies in time and space, the channel assignment must be
recomputed every time the traffic pattern in the mesh network
changes. We also propose a routing metric that captures the
available path bandwidth and an efficient routing protocol, based
on this metric, that exploits both static and dynamic links. The
consistency property guarantees that each node makes an appropriate
packet-forwarding decision while balancing the control overhead of
the network, so that a data packet will traverse the right path.
Abstract: In this paper a new algorithm is designed to generate
random simple polygons from a given set of points in a
two-dimensional plane. The proposed algorithm uses a genetic
algorithm to generate polygons with few vertices. A new merge
algorithm is presented which converts any two polygons into a
simple polygon. This algorithm first changes the two polygons into
a polygonal chain, and then the polygonal chain is converted into a
simple polygon. The process of converting a polygonal chain into a
simple polygon is based on the removal of intersecting edges. The
experimental results show that the proposed algorithm can generate
a large number of different simple polygons and performs better
than established algorithms such as space partitioning and steady
growth.
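The removal step above hinges on detecting pairs of crossing edges. A minimal sketch of that test, and of the simplicity check built on it (function names and the exact predicates are illustrative, not the paper's):

```python
def ccw(a, b, c):
    """Twice the signed area of triangle abc; sign gives orientation."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """True if the open segments p1p2 and q1q2 properly intersect."""
    d1, d2 = ccw(q1, q2, p1), ccw(q1, q2, p2)
    d3, d4 = ccw(p1, p2, q1), ccw(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def is_simple(polygon):
    """Check that no two non-adjacent edges of the closed polygon cross."""
    n = len(polygon)
    edges = [(polygon[i], polygon[(i + 1) % n]) for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if j == i + 1 or (i == 0 and j == n - 1):
                continue  # adjacent edges share a vertex, skip them
            if segments_cross(*edges[i], *edges[j]):
                return False
    return True
```

A square passes the check, while a "bowtie" ordering of the same four points fails it, which is exactly the condition the merge step must repair.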
Abstract: In this study, we examine the data-loss tolerance of a Support Vector Machine (SVM) based activity recognition model and its multi-activity classification performance when data are received over a lossy wireless sensor network. First, the classification algorithm we use is evaluated for resilience to random data loss using 3D acceleration sensor data for sitting, lying, walking, and standing actions. The results show that the proposed classification method can recognize these activities successfully despite high data loss. Second, the effect of differentiated quality-of-service performance on activity recognition success is measured with activity data acquired from a multi-hop wireless sensor network, which introduces high data loss. The effect of the number of nodes on reliability and multi-activity classification success is demonstrated in a simulation environment. To the best of our knowledge, the effect of data loss in a wireless sensor network on the activity detection success rate of an SVM-based classification algorithm has not been studied before.
Abstract: Testing the first-year students of Informatics at the
University of Debrecen revealed that students start their tertiary
studies in programming with a low level of programming knowledge
and algorithmic skills. The possible reasons leading the students
to this very unfortunate result were examined. The results of the test
were compared to the students’ results in the school-leaving exams
and to their self-assessment values. It was found that there is only a
slight connection between the students’ results in the test and in the
school-leaving exams, especially at the intermediate level. Beyond
this, the school-leaving exams do not seem to enable students to
evaluate their own abilities.
Abstract: In this paper we describe the Levenberg-Marquardt
(LM) algorithm for the identification and equalization of CDMA
signals received by an antenna array over communication channels.
The synthesis explains the digital separation and equalization of
signals after propagation through multipath channels generating
intersymbol interference (ISI). Exploiting the discrete transmitted
data and three diversities induced at the reception, the problem can
be formulated as the Block Component Decomposition (BCD) of a
third-order tensor, a new tensor decomposition generalizing the
PARAFAC decomposition. Optimizing the BCD by the
Levenberg-Marquardt method gives encouraging results compared to the
classical alternating least squares (ALS) algorithm. In the
equalization part, we use the Minimum Mean Square Error (MMSE)
criterion to complete the presented method. The simulation results
using the LM algorithm are encouraging.
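The LM iteration itself is a damped Gauss-Newton step. As a generic hedged sketch (not the paper's tensor-structured version; the toy sinusoidal fit below is purely illustrative):

```python
import numpy as np

def levenberg_marquardt(residual, x0, max_iter=60, lam=1e-2):
    """Minimize ||residual(x)||^2 by the LM rule:
    solve (J^T J + lam*I) dx = -J^T r, then shrink lam on a
    successful step and grow it on a failed one."""
    x = np.asarray(x0, dtype=float)
    r = residual(x)
    cost = r @ r
    for _ in range(max_iter):
        J = np.empty((r.size, x.size))   # forward-difference Jacobian
        for j in range(x.size):
            xp = x.copy()
            xp[j] += 1e-7
            J[:, j] = (residual(xp) - r) / 1e-7
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        r_new = residual(x + dx)
        if r_new @ r_new < cost:         # accept the step, trust it more
            x, r, cost, lam = x + dx, r_new, r_new @ r_new, lam * 0.5
        else:                            # reject the step, damp harder
            lam *= 10.0
    return x

# toy usage: recover amplitude and frequency of y = 2*sin(3*t)
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.sin(3.0 * t)
p = levenberg_marquardt(lambda p: p[0] * np.sin(p[1] * t) - y, [1.0, 2.0])
```

The adaptive damping is what distinguishes LM from plain ALS-style alternation: near the solution the step approaches Gauss-Newton, far from it the step approaches damped gradient descent.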
Abstract: A simulation-based VLSI implementation of the
FELICS (Fast Efficient Lossless Image Compression System)
algorithm is proposed to provide lossless image compression and is
implemented in a simulation-oriented VLSI (Very Large Scale
Integration) flow. The performance of lossless image compression is
analyzed, the image is reduced in size without losing image quality,
and the FELICS algorithm is then implemented in VLSI. The FELICS
algorithm uses a simplified adjusted binary code for image
compression; the compressed image is converted into pixels and then
implemented in the VLSI domain. These choices are used to achieve
high processing speed and to minimize area and power. The
simplified adjusted binary code reduces the number of arithmetic
operations and achieves high processing speed. Color-difference
preprocessing is also proposed to improve coding efficiency with
simple arithmetic operations. The VLSI-based FELICS algorithm
provides an effective hardware architecture design with regular,
pipelined data-flow parallelism in four stages. With two-level
parallelism, consecutive pixels can be classified into even and odd
samples, with an individual hardware engine dedicated to each. This
method can be further enhanced by multilevel parallelism.
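The "adjusted binary code" that FELICS applies to in-range pixels is closely related to truncated binary coding. A hedged sketch of that core idea (FELICS additionally rotates the range so that central, more probable values receive the shorter codewords; that remapping is omitted here):

```python
def truncated_binary(x, n):
    """Truncated (economy) binary code for x in [0, n-1], n >= 2.
    When n is not a power of two, the first 2**k - n values get
    k-1 bits instead of k, saving bits versus plain binary."""
    k = (n - 1).bit_length()   # codeword length in bits
    u = (1 << k) - n           # number of short (k-1 bit) codewords
    if x < u:
        return format(x, '0{}b'.format(k - 1))
    return format(x + u, '0{}b'.format(k))
```

For a range of 5 values this yields the prefix-free set 00, 01, 10, 110, 111, i.e. three 2-bit and two 3-bit codewords, which is the kind of arithmetic-light coding the abstract credits for the high processing speed.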
Abstract: In a previous study, a technique to estimate self-location using a lunar image was proposed. In this paper, we consider improving the conventional method with FPGA implementation in mind. Specifically, we introduce the Artificial Bee Colony algorithm to reduce the search time. In addition, we use fixed-point arithmetic to enable high-speed operation on an FPGA.
Abstract: The most important components affecting the
efficiency of photovoltaic power systems are the solar panels. In
other words, the efficiency of these systems is significantly reduced
by the low efficiency of the solar panels. Thus, solar panels should
be operated at the maximum power point through a power converter.
In this study, a boost converter has been designed with a maximum
power point tracking (MPPT) algorithm, namely incremental
conductance (Inc-Cond). Using this algorithm, the importance of the
power converter in MPPT hardware design and the impacts of MPPT
operation are shown. It is worth noting that the initial operating
point is the main criterion determining MPPT performance. In
addition, it is shown that if the load resistance is lower than a
critical value, operation fails. For these analyses, direct
duty-cycle control is used to simplify the control.
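The Inc-Cond decision rule rests on the identity that dP/dV = 0 at the maximum power point, i.e. dI/dV = -I/V. A minimal sketch of one tracking step under direct duty control (the step size and the boost-converter sign convention below are illustrative assumptions):

```python
def inc_cond_step(v, i, v_prev, i_prev, duty, step=0.005):
    """One incremental-conductance MPPT update.
    Compares dI/dV against -I/V and nudges the duty cycle toward
    the MPP. Assumed convention: for a boost converter, increasing
    the duty cycle lowers the panel operating voltage."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            return duty                      # at the MPP, hold
        return duty - step if di > 0 else duty + step
    if di / dv == -i / v:
        return duty                          # dP/dV == 0: at the MPP
    if di / dv > -i / v:                     # dP/dV > 0: raise panel voltage
        return duty - step
    return duty + step                       # dP/dV < 0: lower panel voltage
```

In practice the exact-equality branch is replaced by a small tolerance band; it is kept literal here to mirror the textbook statement of the condition.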
Abstract: This paper presents the optimization of makespan for an
‘n’-job, ‘m’-machine flexible job shop scheduling problem with
sequence-dependent setup times using a genetic algorithm (GA)
approach. A restart scheme has also been applied to prevent
premature convergence. Two case studies are taken into
consideration. Results are obtained with a crossover probability
pc = 0.85 and a mutation probability pm = 0.15. Five simulation runs
are performed for each case study, and the minimum value among them
is taken as the optimal makespan. Results indicate that the optimal
makespan can be achieved with more than one sequence of jobs in a
production order.
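A permutation GA with the stated rates can be sketched as follows. This is a generic illustration, not the paper's encoding: the restart scheme is omitted, and the two-machine flow-shop makespan used as the fitness is a stand-in for the flexible job shop objective.

```python
import random

def ga_minimize(cost, n_jobs, pop_size=30, gens=200, pc=0.85, pm=0.15, seed=1):
    """Toy permutation GA using the paper's rates pc=0.85, pm=0.15.
    `cost` maps a job sequence to a makespan-like value; smaller is better."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n_jobs), n_jobs) for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(gens):
        nxt = [best]                          # elitism: keep the best sequence
        while len(nxt) < pop_size:
            a, b = (min(rng.sample(pop, 3), key=cost) for _ in range(2))
            child = a[:]
            if rng.random() < pc:             # order crossover (OX)
                lo, hi = sorted(rng.sample(range(n_jobs), 2))
                seg = a[lo:hi]
                rest = [j for j in b if j not in seg]
                child = rest[:lo] + seg + rest[lo:]
            if rng.random() < pm:             # swap mutation
                i, j = rng.sample(range(n_jobs), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
        best = min(pop + [best], key=cost)
    return best, cost(best)

# illustrative fitness: 2-machine flow-shop makespan for 5 jobs
proc = [(3, 2), (1, 4), (2, 1), (4, 3), (2, 2)]   # (machine 1, machine 2) times

def makespan(seq):
    t1 = t2 = 0
    for j in seq:
        t1 += proc[j][0]
        t2 = max(t2, t1) + proc[j][1]
    return t2
```

Order crossover keeps every child a valid permutation, which is why it (rather than plain one-point crossover) is the usual choice for sequencing problems.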
Abstract: Artificial Neural Networks (ANNs) can be trained using
back propagation (BP), the most widely used algorithm for
supervised learning with multi-layered feed-forward networks.
Efficient learning by the BP algorithm is required for many practical
applications. The BP algorithm calculates the weight changes of the
artificial neural network, and a common approach is to use a
two-term algorithm consisting of a learning rate (LR) and a momentum
factor (MF). The major drawbacks of the two-term BP learning
algorithm are the problems of local minima and slow convergence
speeds, which limit its scope for real-time applications. Recently,
the addition of an extra term, called a proportional factor (PF), to
the two-term BP algorithm was proposed. The third term increases the
speed of the BP algorithm. However, the PF term can also reduce the
convergence of the BP algorithm, and criteria for evaluating
convergence are required to facilitate the application of the
three-term BP algorithm. Although these two aspects seem to be
closely related, as described later, we summarize various
improvements to overcome the drawbacks. Here we compare the
different methods of convergence of the new three-term BP algorithm.
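The three-term update can be sketched in one line; the exact form of the proportional term varies in the literature, so the version below (PF multiplying the output error e = target - output, with illustrative coefficient values) is an assumption, not the paper's definition:

```python
def three_term_update(w, grad, prev_dw, error, lr=0.1, mf=0.8, pf=0.01):
    """Three-term BP weight update: learning-rate term, momentum
    term, and a proportional-factor term driven by the output error."""
    dw = -lr * grad + mf * prev_dw + pf * error
    return w + dw, dw

# toy usage: fit a single linear weight so that w * x approximates y = 2x
w, prev_dw = 0.0, 0.0
for _ in range(300):
    out = w * 1.0                 # forward pass with input x = 1
    grad = (out - 2.0) * 1.0      # d/dw of 0.5*(w*x - y)^2
    w, prev_dw = three_term_update(w, grad, prev_dw, 2.0 - out)
```

Even in this scalar case the momentum term makes the trajectory oscillatory, which is exactly why the abstract's point about convergence criteria for the three-term variant matters.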
Abstract: Many wireless sensor network applications require
K-coverage of the monitored area. In this paper, we propose a
harmony search based algorithm that is scalable in execution time,
the K-Coverage Enhancement Algorithm (KCEA). It attempts to
enhance the initial coverage and to achieve the required K-coverage
degree for a specific application efficiently. Simulation results show
that the proposed algorithm achieves a coverage improvement of
5.34%, compared to 1.31% for K-Coverage Rate Deployment (K-CRD),
when deploying one additional sensor. Moreover, the proposed
algorithm is more time efficient.
Abstract: This paper provides a comparative study of the
performance of standard PID and adaptive PID controllers tested on
the travel angle of a 3-Degree-of-Freedom (3-DOF) Quanser bench-top
helicopter. Quanser, a well-known manufacturer of educational
bench-top helicopters, has developed a Proportional Integral
Derivative (PID) controller with a Linear Quadratic Regulator (LQR)
for the travel, pitch, and yaw angles of the bench-top helicopter.
The performance of the PID controller is relatively good; however,
it could be further improved if the controller were combined
with an adaptive element. The objective of this research is to design
an adaptive PID controller and then compare its performance with
that of the standard PID. The controller design and testing are
focused on travel-angle control only. The adaptive method used in
this project is a self-tuning controller, in which the controller’s
parameters are updated online. Two adaptive algorithms,
pole-placement and deadbeat, have been chosen as the methods to
achieve optimal controller parameters. Performance comparisons have
shown that the adaptive (deadbeat) PID controller produces more
desirable performance than the standard PID and the adaptive
(pole-placement) controllers. The adaptive (deadbeat) PID controller
attained a very fast settling time (5 seconds) and a very small
overshoot (5% to 7.5%) for 10° to 30° step changes in travel angle.
Abstract: In this work, neural network methods of the MLP type
were applied to a database from an array of six sensors for the
detection of three toxic gases. The number of hidden layers and the
weight values influence the convergence of the learning algorithm.
In this article, we propose a mathematical formula to determine the
optimal number of hidden layers and good weight values, based on
the error back-propagation method. The results of this modeling
improved the discrimination of these gases and optimized the
computation time. The model presented here has proven to be
effective for the fast identification of toxic gases.
Abstract: Ad hoc networks are the future of wireless technology,
as everyone wants fast, accurate, error-free information. With this
in mind, the Bit Error Rate (BER) and power are optimized in this
research paper using the Genetic Algorithm (GA). The digital
modulation techniques used in this paper are Binary Phase Shift
Keying (BPSK), M-ary Phase Shift Keying (M-ary PSK), and Quadrature
Amplitude Modulation (QAM). The work is implemented on wireless ad
hoc networks (WLAN). It is then analyzed which modulation technique
performs best in optimizing the BER and power of the WLAN.
Abstract: In this paper, we present a robust algorithm to recognize extracted text from grocery product images captured by mobile phone cameras. Recognition of such text is challenging since text in grocery product images varies in size, orientation,
style, and illumination, and can suffer from perspective distortion.
Pre-processing is performed to make the characters scale- and
rotation-invariant. Since text degradations cannot be appropriately
modeled using well-known geometric transformations such
as translation, rotation, affine transformation, and shearing, we
use all of the character's black pixels as our feature vector.
Classification is performed with a minimum distance classifier
using the maximum likelihood criterion, which delivers a very
promising Character Recognition Rate (CRR) of 89%. We
achieve a considerably higher Word Recognition Rate (WRR) of
99% when using lower-level linguistic knowledge about product
words during the recognition process.
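A minimum distance classifier on raw pixel features is simple to state. A hedged sketch (the tiny 3x3 glyph templates below are illustrative, not the paper's data; under equal-covariance Gaussian assumptions the minimum Euclidean distance decision coincides with the maximum-likelihood one):

```python
import numpy as np

def min_distance_classify(pixels, templates):
    """Return the label whose template is nearest in squared
    Euclidean distance. `pixels` is a flattened 0/1 array;
    `templates` maps label -> flattened 0/1 reference array."""
    return min(templates, key=lambda c: np.sum((pixels - templates[c]) ** 2))

# toy usage with 3x3 glyph templates for 'I' and 'O'
I_tpl = np.array([0, 1, 0,  0, 1, 0,  0, 1, 0])
O_tpl = np.array([1, 1, 1,  1, 0, 1,  1, 1, 1])
templates = {'I': I_tpl, 'O': O_tpl}
noisy_I = np.array([0, 1, 0,  1, 1, 0,  0, 1, 0])   # one flipped pixel
label = min_distance_classify(noisy_I, templates)
```

Because the whole pixel vector is the feature, the scale and rotation normalization described above must happen before this distance is meaningful.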
Abstract: Taking the design tolerance into account, this paper
presents a novel, efficient approach to generate iso-scallop tool
paths for five-axis strip machining with a barrel cutter. The cutter
location is first determined on the scallop surface instead of the
design surface, and then the cutter is adjusted to the optimal tool
position based on a differential rotation of the tool axis while
simultaneously satisfying the design tolerance. The machining strip
width and error are calculated with the aid of the grazing curve of
the cutter. Based on the proposed tool positioning algorithm, the
tool paths are generated by keeping the scallop height formed by
adjacent tool paths constant. An example is presented to confirm the
validity of the proposed method.
Abstract: Femtocells are regarded as a milestone for next
generation cellular networks. As femtocells are deployed in an
unplanned manner, there is a chance of assigning the same resource to
neighboring femtocells. This scenario may induce co-channel
interference and may seriously affect the service quality of
neighboring femtocells. In addition, the dominant transmit power of a
femtocell will induce co-tier interference to neighboring femtocells.
Thus, to jointly handle co-tier and co-channel interference, we
propose an interference-free power and resource block allocation
(IFPRBA) algorithm for closely located, closed-access femtocells.
Based on the neighboring list, the inter-femto-base-station distance,
and the uplink noise power, the IFPRBA algorithm assigns
non-interfering power and resources to femtocells. The IFPRBA
algorithm also guarantees quality of service to the femtouser based
on knowledge of the resource requirement, the connection type, and
the tolerable delay budget. Simulation results show that the
interference power experienced under the IFPRBA algorithm is below
the tolerable interference power, and hence the overall service
success ratio, PRB efficiency, and network throughput are higher
than with the conventional resource allocation framework for
femtocells (RAFF) algorithm.
Abstract: Real-time image and video processing is a demand in
many computer vision applications, e.g. video surveillance, traffic
management, and medical imaging. The processing of these video
applications requires high computational power. Thus, an optimal
solution is the collaboration of a CPU with hardware accelerators. In
this paper, a Canny edge detection hardware accelerator is proposed.
Edge detection is one of the basic building blocks of video and image
processing applications; it is a common block in the pre-processing
phase of the image and video processing pipeline. Our approach
offloads the Canny edge detection algorithm from the processing
system (PS) to programmable logic (PL), taking advantage of the
High-Level Synthesis (HLS) tool flow to accelerate the
implementation on the Zynq platform. The resulting implementation
enables up to a 100x performance improvement through hardware
acceleration. CPU utilization drops, and the frame rate reaches
60 fps for a 1080p full-HD input video stream.
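For reference, the gradient and non-maximum-suppression core that such an accelerator implements can be sketched in software. This is a simplified illustration only (Gaussian smoothing and hysteresis thresholding, which full Canny includes, are omitted; the threshold value is an assumption):

```python
import numpy as np

def sobel_edges(img, thresh=100.0):
    """Sobel gradients + non-maximum suppression on a grayscale
    image; returns a boolean edge map the same size as `img`."""
    img = img.astype(float)
    p = np.pad(img, 1)   # zero-padded borders
    # Sobel kernels applied via shifted slices
    gx = ((p[1:-1, 2:] - p[1:-1, :-2]) * 2
          + (p[:-2, 2:] - p[:-2, :-2]) + (p[2:, 2:] - p[2:, :-2]))
    gy = ((p[2:, 1:-1] - p[:-2, 1:-1]) * 2
          + (p[2:, :-2] - p[:-2, :-2]) + (p[2:, 2:] - p[:-2, 2:]))
    mag = np.hypot(gx, gy)
    edges = np.zeros_like(mag, dtype=bool)
    h, w = mag.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if mag[y, x] < thresh:
                continue
            # keep only local maxima along the (quantized) gradient direction
            if abs(gx[y, x]) >= abs(gy[y, x]):   # mostly horizontal gradient
                n1, n2 = mag[y, x - 1], mag[y, x + 1]
            else:                                # mostly vertical gradient
                n1, n2 = mag[y - 1, x], mag[y + 1, x]
            if mag[y, x] >= n1 and mag[y, x] >= n2:
                edges[y, x] = True
    return edges
```

The per-pixel, fixed-neighborhood structure of this loop is what makes the algorithm map naturally onto a pipelined PL datapath via HLS.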
Abstract: The star network is one of the promising
interconnection networks for future high-speed parallel computers
and is expected to be one of the future-generation networks. The star
network is both edge- and vertex-symmetric, has been shown to have
many elegant topological properties, and possesses a hierarchical
structure. Although much research has been done on this promising
network in the literature, it still lacks algorithms for the load
balancing problem. In this paper we address this issue by
investigating and proposing an efficient load balancing algorithm
for the star network, called the Star Clustered Dimension Exchange
Method (SCDEM). The proposed algorithm is based on the Clustered
Dimension Exchange Method (CDEM). The SCDEM algorithm is shown to be
efficient in redistributing the load as evenly as possible among all
nodes of the different factor networks.
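The dimension-exchange idea underlying CDEM-style methods is easiest to see on a hypercube, where pairing across one dimension per round reaches perfect balance in d rounds. This generic sketch illustrates that principle only; it is not the paper's star-graph variant:

```python
def dimension_exchange(loads):
    """Dimension-exchange load balancing on a d-dimensional
    hypercube of n = 2**d nodes. In round k, each node averages
    its load with the neighbour whose id differs in bit k."""
    n = len(loads)               # must be a power of two
    d = n.bit_length() - 1
    loads = list(loads)
    for k in range(d):
        for u in range(n):
            v = u ^ (1 << k)     # neighbour across dimension k
            if u < v:            # process each pair once
                avg = (loads[u] + loads[v]) / 2
                loads[u] = loads[v] = avg
    return loads
```

On an 8-node cube, three rounds suffice to bring every node to the global mean; the star-graph algorithm must instead organize such exchanges over the star's factor networks.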