Abstract: Background modeling and subtraction has been widely used in video
analysis as an effective method for detecting moving objects in many computer
vision applications. Recently, a large number of approaches have been developed
to tackle different types of challenges in this field; however, dynamic
backgrounds and illumination variations remain the most frequently encountered
problems in practice. This paper presents a two-layer model based on the
codebook algorithm combined with the local binary pattern (LBP) texture
measure, designed to handle dynamic background and illumination variation
problems. More specifically, the first layer is a block-based codebook that
combines an LBP histogram with the mean value of each RGB color channel.
Because LBP features are invariant to monotonic gray-scale changes, this layer
produces block-wise detection results with considerable tolerance to
illumination variations. A pixel-based codebook is then employed to refine the
output of the first layer and further eliminate false positives. As a result,
the proposed approach greatly improves accuracy under dynamic backgrounds and
illumination changes.
Experimental results on several popular background subtraction
datasets demonstrate very competitive performance compared to
previous models.
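For illustration, the following is a minimal sketch of the block descriptor outlined above, under an assumed block size and neighborhood: a basic 8-neighbour LBP histogram of a grey-scale block together with the mean of each RGB channel. The codebook matching and update logic of the two layers is not reproduced here.

```python
import numpy as np

def lbp_histogram(gray_block):
    """Basic 3x3 LBP: threshold the 8 neighbours of each pixel against its centre value."""
    h, w = gray_block.shape
    centre = gray_block[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(centre, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray_block[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(int) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / codes.size              # normalised LBP histogram of the block

def block_features(rgb_block):
    """Feature vector of one block: LBP histogram of the grey image plus per-channel RGB means."""
    gray = rgb_block.mean(axis=2)
    return lbp_histogram(gray), rgb_block.reshape(-1, 3).mean(axis=0)

# Example on a random 16x16 RGB block (placeholder for a real video frame block).
block = np.random.default_rng(0).integers(0, 256, size=(16, 16, 3)).astype(float)
hist, rgb_means = block_features(block)
print(hist.shape, rgb_means)
```

A first-layer codebook would store such (histogram, RGB mean) pairs per block and match incoming blocks against them; because LBP compares each neighbour only to its centre pixel, a monotonic brightness change leaves the histogram unchanged, which is what gives the block layer its tolerance to illumination variations.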
Abstract: Small-size, low-power sensors with sensing, signal processing, and
wireless communication capabilities are well suited for wireless sensor
networks. Due to limited resources and battery constraints, the complex routing
algorithms used in ad-hoc networks cannot be employed in sensor networks. In
this paper, we propose node-disjoint multi-path hexagon-based routing
algorithms for wireless sensor networks. We present the details of the
algorithm and compare it with related work. Simulation results show that the
proposed scheme achieves better performance in terms of efficiency and message
delivery ratio.
Abstract: Online measurement of the product quality is a
challenging task in cement production, especially in the production of
Celitement, a novel environmentally friendly hydraulic binder. The
mineralogy and chemical composition of clinker in ordinary Portland
cement production are measured by X-ray diffraction (XRD) and
X-ray fluorescence (XRF), where only crystalline constituents can be
detected. But only a small part of the Celitement components can be
measured via XRD, because most constituents have an amorphous
structure. This paper describes the development of algorithms
suitable for an on-line monitoring of the final processing step of
Celitement based on NIR data. For calibration, intermediate products
were dried at different temperatures and ground for varying durations.
The products were analyzed using XRD and thermogravimetric analyses
together with NIR spectroscopy to investigate the relationship between
the drying and milling processes on the one hand and the NIR signal on
the other. As a result,
different characteristic parameters have been defined. A short
overview of the Celitement process and the challenging tasks of the
online measurement and evaluation of the product quality will be
presented. Subsequently, methods for systematic development of
near-infrared calibration models and the determination of the final
calibration model will be introduced. Application of the model to
experimental data illustrates that NIR spectroscopy allows a quick
and sufficiently accurate determination of crucial process parameters.
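As a concrete illustration of calibration-model development, the sketch below selects the number of latent variables by cross-validation and fits a final model. The abstract does not name a specific regression method; partial least squares is a common choice for NIR spectra and is used here purely as an assumed stand-in, with synthetic spectra and reference values in place of real measurement data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score

# Synthetic stand-ins: X holds NIR spectra, y a reference value (e.g. from XRD or
# thermogravimetric analysis) for the same samples. Real data would replace these.
rng = np.random.default_rng(0)
n_samples, n_wavelengths = 60, 200
X = rng.standard_normal((n_samples, n_wavelengths))
y = X[:, 40] - 0.5 * X[:, 120] + 0.05 * rng.standard_normal(n_samples)

def cv_r2(n_components, X, y, n_splits=5):
    """Mean cross-validated R^2 for a PLS model with the given number of latent variables."""
    scores = []
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        pls = PLSRegression(n_components=n_components).fit(X[train], y[train])
        scores.append(r2_score(y[test], pls.predict(X[test]).ravel()))
    return float(np.mean(scores))

scores = {k: cv_r2(k, X, y) for k in range(1, 11)}
best_k = max(scores, key=scores.get)
final_model = PLSRegression(n_components=best_k).fit(X, y)
print("latent variables:", best_k, "cross-validated R^2:", round(scores[best_k], 3))
```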
Abstract: Average temperatures worldwide are expected to
continue to rise. At the same time, major cities in developing
countries are becoming increasingly populated and polluted.
Governments are faced with the problems of overheating and poor air
quality in residential buildings. This paper presents the development
of a model that is able to estimate occupant exposure
to extreme temperatures and high air pollution within domestic
buildings. Building physics simulations were performed using the
EnergyPlus building physics software. An accurate metamodel is
then formed by randomly sampling building input parameters and
training on the outputs of EnergyPlus simulations. Metamodels are
used to vastly reduce the amount of computation time required when
performing optimisation and sensitivity analyses. Neural Networks
(NNs) have been compared to a Radial Basis Function (RBF)
algorithm when forming a metamodel. These techniques were
implemented using the PyBrain and scikit-learn Python libraries,
respectively. NNs are shown to perform around 15% better than RBFs
when estimating overheating and air pollution metrics modelled by
EnergyPlus.
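A minimal sketch of the metamodelling comparison, under assumed data: random samples of building input parameters stand in for the EnergyPlus inputs, and a synthetic response stands in for an overheating or pollution metric. scikit-learn's MLPRegressor and an RBF-kernel ridge regressor are used here only as stand-ins for the paper's PyBrain NN and RBF implementations.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 5))            # e.g. glazing ratio, infiltration rate, ...
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(500)  # stand-in output

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

metamodels = {
    "NN (MLP)": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0),
    "RBF (kernel ridge)": KernelRidge(kernel="rbf", alpha=1e-2, gamma=1.0),
}
for name, model in metamodels.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```

Once trained, either metamodel can be evaluated many thousands of times per second, which is what makes the optimisation and sensitivity analyses tractable compared with running EnergyPlus directly.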
Abstract: Social networking sites such as Twitter and Facebook attract over
500 million users across the world. For those users, social and even practical
life has become intertwined with these platforms, and their interaction with
social networking has affected their lives permanently. Accordingly, social
networking sites have become among the main channels responsible for the vast
dissemination of different kinds of information during real-time events. This
popularity of social networking has led to various problems, including the
possibility of exposing users to incorrect information through fake accounts,
which results in the spread of malicious content during live events. This
situation can cause considerable damage in the real world to society in
general, including citizens, business entities, and others. In this paper, we
present a classification method for detecting fake accounts on Twitter. The
study determines a minimal set of the main factors that influence the
detection of fake accounts on Twitter, and these factors are then applied
using different classification techniques. The results of these techniques are
compared, and the most accurate algorithm is selected according to the
accuracy of the results. The study has been compared with recent research in
the same area, and this comparison confirms the accuracy of the proposed
approach. We claim that this approach can be applied continuously on the
Twitter social network to automatically detect fake accounts; moreover, it can
be applied to other social networking sites, such as Facebook, with minor
changes according to the nature of the social network, as discussed in this
paper.
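A minimal sketch of the classifier-comparison step, under assumed data: the feature names below are hypothetical profile attributes, not the minimized factor set determined in the paper, and the labels are synthetic. Several classifiers are cross-validated and the most accurate one is selected.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.poisson(300, n),        # followers_count (hypothetical feature)
    rng.poisson(400, n),        # friends_count
    rng.poisson(50, n),         # statuses_count
    rng.integers(0, 2, n),      # default_profile_image
])
# Synthetic label rule for illustration only: 1 = fake, 0 = genuine.
y = ((X[:, 3] == 1) & (X[:, 2] < 45)).astype(int)

classifiers = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "naive_bayes": GaussianNB(),
}
accuracies = {name: cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
              for name, clf in classifiers.items()}
best = max(accuracies, key=accuracies.get)
print(accuracies, "-> selected:", best)
```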
Abstract: This paper focuses on the mathematical modeling of the
solidification of an Al alloy in a cube mold cavity to study the
solidification behavior of the casting process. A parametric investigation of
the solidification process inside the cavity was performed using a
computational solidification/melting model coupled with a volume of fluid
(VOF) model. An implicit filling algorithm is used in this study to capture
the overall process, from the filling stage to solidification, in a model
metal casting process. The model is validated against past studies under the
same conditions. The
solidification process is analyzed by including the effect of pouring
velocity as well as natural convection from the wall and geometry of
the cavity. These studies reveal the possibility of various defects arising
during the solidification process.
Abstract: Bezier curves have useful properties for the path generation
problem; for instance, they can generate reference trajectories for vehicles
that satisfy path constraints. Such algorithms join cubic Bezier curve
segments smoothly to generate the path. One of the most useful properties of a
Bezier curve is its curvature. In mathematics, curvature is the amount by
which a geometric object deviates from being flat, or straight in the case of
a line. Another extrinsic example of curvature is a circle, where the
curvature at any point is equal to the reciprocal of its radius. The smaller
the radius, the higher the curvature, and thus the more sharply the vehicle
needs to turn. In this study, we use Bezier curves to fit a highway-like
curve. We use a different approach to find the best approximation of the curve
so that it resembles the highway-like curve. We compute the curvature by
analytical differentiation of the Bezier curve, and then compute the maximum
driving speed using the curvature information obtained. Our research rests on
some assumptions: first, that the Bezier curve estimates the real shape of the
curve, which can be verified visually. Even though the Bezier fitting process
does not interpolate the curve of interest exactly, we believe the speed
estimates are acceptable. We verified our results against a manual calculation
of the curvature from the map.
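A short sketch of the curvature computation by analytical differentiation of a cubic Bezier curve, and of the speed estimate derived from it by capping lateral acceleration (v = sqrt(a_lat / kappa)). The control points and the lateral-acceleration limit below are assumed values for illustration only.

```python
import numpy as np

def cubic_bezier_derivatives(p0, p1, p2, p3, t):
    """First and second derivatives of a cubic Bezier curve at parameter t."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    d1 = 3 * (1 - t) ** 2 * (p1 - p0) + 6 * (1 - t) * t * (p2 - p1) + 3 * t ** 2 * (p3 - p2)
    d2 = 6 * (1 - t) * (p2 - 2 * p1 + p0) + 6 * t * (p3 - 2 * p2 + p1)
    return d1, d2

def curvature(p0, p1, p2, p3, t):
    """Planar curvature kappa(t) = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)."""
    d1, d2 = cubic_bezier_derivatives(p0, p1, p2, p3, t)
    num = abs(d1[0] * d2[1] - d1[1] * d2[0])
    den = (d1[0] ** 2 + d1[1] ** 2) ** 1.5
    return num / den

def max_speed(kappa, a_lat=2.0):
    """Speed bound from an assumed lateral-acceleration limit a_lat (m/s^2)."""
    return np.inf if kappa == 0 else np.sqrt(a_lat / kappa)

# Example: control points of a gentle highway-like bend (hypothetical values, metres).
P = [(0, 0), (50, 0), (100, 20), (150, 60)]
ts = np.linspace(0, 1, 101)
kappas = np.array([curvature(*P, t) for t in ts])
print("max curvature:", kappas.max(), "-> speed limit (m/s):", max_speed(kappas.max()))
```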
Abstract: In this paper, based on the classical LSQR algorithm for solving
least-squares problems, an iterative method is proposed for the least-squares
solution of a constrained matrix equation. Using the Kronecker product, a
matrix-form LSQR is presented to obtain the like-minimum norm and minimum norm
solutions in a constrained matrix set for symmetric arrowhead matrices.
Finally, numerical examples are given to investigate the performance.
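As a generic illustration of the Kronecker-product reformulation (not the paper's structured matrix-form iteration, and without the symmetric-arrowhead constraint), the least-squares matrix equation A X B ≈ C can be vectorized via vec(A X B) = (B^T ⊗ A) vec(X) and handed to a standard LSQR solver:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Random test matrices (illustrative only).
rng = np.random.default_rng(0)
m, n, p, q = 6, 4, 4, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))
C = rng.standard_normal((m, q))

K = np.kron(B.T, A)                        # (m*q) x (n*p) coefficient matrix
x = lsqr(K, C.reshape(-1, order="F"))[0]   # LSQR on the vectorized system
X = x.reshape((n, p), order="F")           # fold the minimum-norm solution back into matrix form

print("residual norm:", np.linalg.norm(A @ X @ B - C))
```

The matrix-form LSQR described in the abstract avoids forming the Kronecker matrix explicitly and restricts the iterates to the constrained matrix set; the sketch above only shows the underlying equivalence.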
Abstract: Prior literature in the field of adaptive and personalized learning
sequences in e-learning has proposed and implemented various mechanisms to
improve the learning process, such as individualization and personalization,
but these are complex to implement due to expensive algorithmic programming
and the need for extensive prior data. The main objective of personalizing a
learning sequence is to maximize learning by dynamically selecting the closest
teaching operation in order to achieve the learning competency of the learner.
In this paper, an evolutionary technique has been proposed and tested to
perform individualization and personalization using a modified reversed
roulette wheel selection algorithm that runs in O(n). The technique is simpler
to implement and algorithmically less expensive than other evolutionary
algorithms, since it collects dynamic real-time performance metrics, such as
examinations, reviews, and study, to form a single numerical RWSA fitness
value. Results show that the implemented system is capable of recommending new
learning sequences that lessen study time based on the student's prior
knowledge and real performance metrics.
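A minimal sketch of fitness-proportionate (roulette wheel) selection in a single O(n) pass. The paper's modified reversed variant is not reproduced here, only the underlying selection principle; the teaching operations and scores below are hypothetical.

```python
import random

def roulette_wheel_select(items, fitness):
    """Fitness-proportionate selection in one O(n) pass (assumes non-negative fitness)."""
    total = sum(fitness)
    pick = random.uniform(0, total)   # spin the wheel
    running = 0.0
    for item, f in zip(items, fitness):
        running += f
        if running >= pick:
            return item
    return items[-1]                  # guard against floating-point round-off

# Hypothetical teaching operations scored by a single numerical fitness value.
operations = ["review_quiz", "video_lecture", "worked_example", "practice_set"]
scores = [0.2, 0.5, 0.9, 0.4]
print(roulette_wheel_select(operations, scores))
```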
Abstract: In this paper, a learning algorithm using neural networks to improve roll stability and prevent rollover in a single-unit heavy vehicle is proposed. First, an LQR controller is designed to keep the normalized rollover measures, between the front and rear axles, below unity; data collected from this controller are then used as the training basis of a neural regulator. The ANN controller is thereafter applied to the nonlinear side-force model, and gives more satisfactory results than the LQR one.
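A brief sketch of the first stage under an assumed linear roll-plane model (the matrices below are illustrative, not the paper's vehicle model): the LQR gain is obtained from the continuous-time algebraic Riccati equation, and state/control pairs logged from this controller would then form the neural network's training set.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical linear roll-plane model x_dot = A x + B u (values are illustrative only).
A = np.array([[0.0, 1.0],
              [-8.0, -1.5]])
B = np.array([[0.0],
              [2.0]])
Q = np.diag([10.0, 1.0])   # penalize roll angle and roll rate
R = np.array([[0.1]])      # penalize control effort

# LQR gain: K = R^-1 B^T P, with P solving the continuous-time algebraic Riccati equation.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# State/control pairs (x, u = -K x) logged from simulations of this controller could then
# serve as the training set for a neural-network regulator, as the abstract describes.
x = np.array([0.05, 0.0])
u = -K @ x
print("LQR gain:", K, "control:", u)
```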
Abstract: Inspired by the Formula 1 competition, IMechE (Institution of
Mechanical Engineers) and Formula SAE (Society of Automotive Engineers)
organize annual competitions for university and college students worldwide to
compete with a single-seat racecar they have designed and built. Design of the
chassis or frame is a key component of the competition because the weight and
stiffness properties are directly related to the performance of the car and
the safety of the driver. In addition, a reduced weight of the chassis has a
direct influence on the design of other components in the car. Among
others, it improves the power to weight ratio and the aerodynamic
performance. As the power output of the engine or the battery
installed in the car is limited to 80 kW, increasing the power to
weight ratio demands reduction of the weight of the chassis, which
represents the major part of the weight of the car. In order to reduce
the weight of the car, the ION Racing team from the University of
Stavanger, Norway, opted for a monocoque design. To ensure
fulfilment of the competition requirements of the chassis, the
monocoque design should provide sufficient torsional stiffness and
absorb the impact energy in case of a possible collision. The study reported in this article is based on the requirements of the
Formula Student competition. As part of this study, diverse
mechanical tests were conducted to determine the mechanical
properties and performances of the monocoque design. Upon a
comprehensive theoretical study of the mechanical properties of
sandwich composite materials and the requirements of monocoque
design in the competition rules, diverse tests were conducted
including 3-point bending test, perimeter shear test and test for
absorbed energy. The test panels were made in-house and prepared with a size
equivalent to the side impact zone of the monocoque, i.e. 275 mm x 500 mm, so
that the obtained test results would be representative. Different layups of
the test panels with identical core
material and the same number of layers of carbon fibre were tested
and compared. Influence of the core material thickness was also
studied. Furthermore, analytical calculations and numerical analysis
were conducted to check compliance with the stated rules for Structural
Equivalency with steel grade SAE/AISI 1010. The test results were
also compared with calculated results with respect to bending and
torsional stiffness, energy absorption, buckling, etc. The obtained results
demonstrate that the material composition and strength of the composite
material selected for the monocoque design provide structural properties
equivalent to those of a welded steel frame and thus comply with the
competition requirements. The developed analytical
calculation algorithms and relations will be useful for future
monocoque designs with different lay-ups and compositions.
Abstract: The change of conditions for production companies in high-wage
countries is characterized by the globalization of competition and the
transition from a supplier's to a buyer's market. The
companies need to face the challenges of reacting flexibly to these
changes. Due to the significant and increasing degree of automation,
assembly has become the most expensive production process.
Regarding the reduction of production cost, assembly consequently
offers a considerable rationalizing potential. Therefore, an
aerodynamic feeding system has been developed at the Institute of
Production Systems and Logistics (IFA), Leibniz Universitaet
Hannover. This system has been enabled to adjust itself by using a
genetic algorithm. The longer this genetic algorithm runs, the better the
feeding quality. In this paper, the relation between the system's setting time
and the feeding quality is examined, and a function which enables the user to
achieve the minimum total feeding time is presented.
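A minimal sketch of the trade-off behind that function, with assumed (hypothetical) functional forms: the feeding quality improves the longer the genetic algorithm tunes the system, while the time needed to feed a batch shrinks as quality rises, so the total feeding time has a minimum over the setting time.

```python
import numpy as np

def feeding_quality(t_set, q_max=0.98, tau=120.0):
    """Assumed saturating quality curve over the GA setting time t_set (seconds)."""
    return q_max * (1.0 - np.exp(-t_set / tau))

def total_time(t_set, n_parts=10_000, t_per_part=0.5):
    """Setting time plus the batch feeding time at the achieved quality
    (misoriented parts must be re-fed, hence the 1/quality factor)."""
    q = feeding_quality(t_set)
    return t_set + n_parts * t_per_part / max(q, 1e-6)

t_grid = np.linspace(1, 3600, 3600)
best = t_grid[np.argmin([total_time(t) for t in t_grid])]
print("best setting time (s):", round(best), "total time (s):", round(total_time(best)))
```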
Abstract: One of the global combinatorial optimization
problems in machine learning is feature selection. It is concerned with
removing irrelevant, noisy, and redundant data while keeping the original
meaning of the data. Attribute reduction
in rough set theory is an important feature selection method. Since
attribute reduction is an NP-hard problem, it is necessary to
investigate fast and effective approximate algorithms. In this paper,
we propose two feature selection mechanisms based on memetic
algorithms (MAs), which combine the genetic algorithm with a fuzzy
record-to-record travel algorithm and a fuzzy-controlled great deluge
algorithm, to identify a good balance between local search and
genetic search. In order to verify the proposed approaches, numerical
experiments are carried out on thirteen datasets. The results show that
the MA approaches are efficient in solving attribute reduction
problems when compared with other meta-heuristic approaches.
Abstract: The aim of this paper is to propose a general
framework for storing, analyzing, and extracting knowledge from
two-dimensional echocardiographic images, color Doppler images,
non-medical images, and general data sets. A number of high
performance data mining algorithms have been used to carry out this
task. Our framework encompasses four layers, namely physical storage, object
identification, knowledge discovery, and user level. Techniques such as the
active contour model to identify the cardiac chambers, pixel classification to
segment the color Doppler echo image, a universal model for image retrieval, a
Bayesian method for classification, and parallel algorithms for image
segmentation were employed. Using the efficiently constructed feature vector
database, one can perform various data mining tasks such as clustering and
classification with efficient algorithms, along with image mining given a
query image. All these facilities are included in the framework, which is
supported by a state-of-the-art user interface (UI). The algorithms were
tested with actual patient data and the Corel image database, and the results
show that their performance is better than previously reported results.
Abstract: The power electronic components within electric vehicles (EVs) need to operate in several important modes. Some modes directly influence safety, while others influence vehicle performance. Given the variety of functions and operational modes required of the power electronics, they need to meet efficiency requirements to minimize power losses. Another challenge in the control and construction of such systems is the ability to support bidirectional power flow. This paper considers the construction, operation, and feasibility of available converters for electric vehicles with feasible configurations of electrical buses and loads. It describes the logic and control signals for the converters under different operating conditions, based on efficiency and energy usage.
Abstract: Considering the challenges of short product life cycles
and growing variant diversity, cost minimization and manufacturing
flexibility increasingly gain importance to maintain a competitive
edge in today’s global and dynamic markets. In this context, an
aerodynamic part feeding system for high-speed industrial assembly
applications has been developed at the Institute of Production
Systems and Logistics (IFA), Leibniz Universitaet Hannover. The
aerodynamic part feeding system outperforms conventional systems
with respect to its process safety, reliability, and operating speed. In
this paper, a multi-objective optimisation of the aerodynamic feeding
system regarding the orientation rate, the feeding velocity, and the
required nozzle pressure is presented.
Abstract: In this paper, a robust fault detection and isolation
(FDI) scheme is developed to monitor a multivariable nonlinear
chemical process called the Chylla-Haase polymerization reactor,
when it is under the cascade PI control. The scheme employs a radial
basis function neural network (RBFNN) in an independent mode to
model the process dynamics, using the weighted sum-squared prediction error as
the residual. The recursive orthogonal least squares (ROLS) algorithm is
employed to train the model, overcoming the training difficulty of the
independent mode of the
network. Then, another RBFNN is used as a fault classifier to isolate
faults from different features involved in the residual vector. Several
actuator and sensor faults are simulated in a nonlinear simulation of
the reactor in Simulink. The scheme is used to detect and isolate the
faults on-line. The simulation results show the effectiveness of the
scheme even when the process is subjected to disturbances and
uncertainties including significant changes in the monomer feed rate,
fouling factor, impurity factor, ambient temperature, and
measurement noise. The simulation results are presented to illustrate
the effectiveness and robustness of the proposed method.
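A minimal sketch of the residual logic only, under assumed dimensions and parameters (the Chylla-Haase reactor model, the cascade PI loop, and the ROLS training are not reproduced): an RBF network predicts the monitored outputs, the weighted sum-squared prediction error forms the residual, and a threshold flags a fault.

```python
import numpy as np

def rbf_predict(x, centers, widths, weights):
    """Gaussian RBF network: y_hat = W^T phi(x)."""
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * widths ** 2))
    return weights.T @ phi

def residual(y_meas, y_hat, w):
    """Weighted sum-squared prediction error."""
    return float(np.sum(w * (y_meas - y_hat) ** 2))

rng = np.random.default_rng(1)
centers = rng.standard_normal((20, 3))          # hypothetical RBF centres over 3 inputs
widths = np.full(20, 1.0)
weights = rng.standard_normal((20, 2))          # 2 monitored outputs
w = np.array([1.0, 0.5])                        # output weighting
threshold = 0.4                                 # assumed detection threshold

x = rng.standard_normal(3)                      # current process inputs
y_nominal = rbf_predict(x, centers, widths, weights)
y_meas = y_nominal + np.array([0.8, 0.0])       # injected sensor offset fault
r = residual(y_meas, y_nominal, w)
print("residual:", round(r, 3), "fault detected:", r > threshold)
```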
Abstract: The aim of this work is to detect geometric shape objects in an
image. In this paper, the object is considered to be circular. The
identification requires finding three characteristics: the number, size, and
location of the objects. To achieve this goal, the paper presents an algorithm
that combines several statistical approaches with image analysis techniques,
and this algorithm has been implemented to meet the paper's major objectives.
The algorithm has been evaluated on simulated data, where it yields good
results, and has then been applied to real data.
Abstract: Finding the optimal 3D path of an aerial vehicle under flight
mechanics constraints is a major challenge, especially when the algorithm has
to produce real-time results in flight. Kinematic models and Pythagorean
hodograph curves have been widely used in mobile robotics to solve this
problem. The level of difficulty is mainly driven by the number of constraints
to be saturated at the same time while minimizing the total length of the
path. In this paper, we suggest a pragmatic algorithm capable of
simultaneously saturating most of the constraints that dimension helicopter 3D
trajectories, such as curvature, curvature derivative, torsion, torsion
derivative, climb angle, climb angle derivative, and positions. The trajectory
generation algorithm is able to generate versatile, complex 3D motion
primitives feasible for a helicopter, with parameterization of the curvature
and the climb angle. An upper-level "motion primitives' concatenation"
algorithm is also presented. In this article we introduce a new way of
designing three-dimensional trajectories based on what we call the "Dubins
gliding symmetry conjecture". This high-performance algorithm will soon be
integrated into a real-time decision system dealing with in-flight safety
issues.
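A generic sketch (not the paper's algorithm) of evaluating the constraint quantities it lists, namely curvature, torsion, and climb angle, along a 3D parametric trajectory. The trajectory here is an illustrative helical climb with assumed radius, angular rate, and climb rate.

```python
import numpy as np

def trajectory(t, radius=200.0, omega=0.05, climb_rate=3.0):
    """Illustrative helical climb r(t); a real primitive would replace this."""
    return np.array([radius * np.cos(omega * t),
                     radius * np.sin(omega * t),
                     climb_rate * t])

def constraint_profile(t, h=1e-2):
    """Curvature kappa, torsion tau, and climb angle gamma via finite differences."""
    r1 = (trajectory(t + h) - trajectory(t - h)) / (2 * h)                    # r'(t)
    r2 = (trajectory(t + h) - 2 * trajectory(t) + trajectory(t - h)) / h**2   # r''(t)
    r3 = (trajectory(t + 2*h) - 2*trajectory(t + h)
          + 2*trajectory(t - h) - trajectory(t - 2*h)) / (2 * h**3)           # r'''(t)
    cross = np.cross(r1, r2)
    kappa = np.linalg.norm(cross) / np.linalg.norm(r1) ** 3
    tau = np.dot(cross, r3) / np.dot(cross, cross)
    gamma = np.arcsin(r1[2] / np.linalg.norm(r1))                             # climb angle
    return kappa, tau, gamma

for t in (0.0, 30.0, 60.0):
    k, tau, g = constraint_profile(t)
    print(f"t={t:5.1f}  kappa={k:.5f}  tau={tau:.5f}  climb={np.degrees(g):.2f} deg")
```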
Abstract: In this paper, we propose a variational EM inference algorithm for
the multi-class Gaussian process classification model that can be used in the
field of human behavior recognition. This algorithm can simultaneously derive
both the posterior distribution of a latent function and estimates of the
hyper-parameters in a multi-class Gaussian process classification model. Our
algorithm is based on the Laplace approximation (LA) technique and the
variational EM framework, and is performed in two steps: the expectation step
and the maximization step. First, in the expectation step, using the Bayesian
formula and the LA technique, we approximately derive the posterior
distribution of the latent function, which indicates the possibility that each
observation belongs to a certain class in the Gaussian process classification
model. Second, in the maximization step, using the derived posterior
distribution of the latent function, we compute the maximum likelihood
estimates of the hyper-parameters of the covariance matrix needed to define
the prior distribution of the latent function. These two steps are repeated
iteratively until a convergence condition is satisfied. Moreover, we apply the
proposed algorithm to a human action classification problem using a public
database, namely the KTH human action data set. Experimental results reveal
that the proposed algorithm performs well on this data set.
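A simplified sketch of the expectation step for the binary case (the paper's multi-class variational EM is more involved): the Laplace approximation locates the mode of the latent-function posterior by Newton iteration, given an RBF covariance with assumed hyper-parameters and toy data in place of the KTH features.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance with assumed hyper-parameters."""
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def laplace_posterior_mode(K, y, n_iter=50, tol=1e-8):
    """Newton iteration for the mode of p(f | X, y) with a logistic likelihood, y in {-1, +1}."""
    n = len(y)
    f = np.zeros(n)
    for _ in range(n_iter):
        pi = sigmoid(f)
        grad = (y + 1) / 2 - pi            # d log p(y|f) / df
        W = pi * (1 - pi)                  # negative Hessian of the log likelihood (diagonal)
        b = W * f + grad
        f_new = K @ np.linalg.solve(np.eye(n) + W[:, None] * K, b)   # f <- K (I + W K)^{-1} b
        if np.max(np.abs(f_new - f)) < tol:
            return f_new
        f = f_new
    return f

# Toy data in place of the KTH features; the Gaussian posterior centred at this
# mode is the E-step output of the Laplace approximation.
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
K = rbf_kernel(X, X) + 1e-6 * np.eye(len(y))
f_hat = laplace_posterior_mode(K, y)
print("training accuracy at the posterior mode:", np.mean(np.sign(f_hat) == y))
```

The maximization step would then re-estimate the covariance hyper-parameters by maximizing the Laplace-approximated marginal likelihood, and the two steps would alternate until convergence, as described in the abstract.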