Abstract: Tracing and locating users' geographical location (Geolocation) is used extensively in today's Internet. Whenever we request a page from Google, for example, we are automatically forwarded (unless configured otherwise) to the page in the relevant language, and, among other things, advertisements specific to the identified location are presented. Geolocation has a significant impact especially in the area of network security. Because of the way the Internet works, attacks can be launched from almost anywhere; attribution therefore requires knowledge of the origin of an attack, and thus Geolocation, in order to trace back an attacker. In addition, Geolocation can be used very successfully to increase the security of a network during operation, i.e. before an intrusion has actually taken place. Similar to greylisting for email, Geolocation allows one to (i) correlate detected attacks with new connections and (ii) consequently classify traffic a priori as more suspicious, in particular allowing this traffic to be inspected in more detail. Although numerous Geolocation techniques exist, each strategy is subject to certain restrictions. Following the ideas of Endo et al., this publication tries to overcome these shortcomings with a combined solution of different methods that allows improved and optimized Geolocation. We therefore present our architecture for improved Geolocation, designing a new algorithm that combines several Geolocation techniques to increase accuracy.
Abstract: Eight difference schemes and five limiters are applied to the numerical computation of a Riemann problem. The resolution of the discontinuities produced by each scheme is compared. Numerical dissipation and its estimation are discussed. The results show that the numerical dissipation of each scheme is vital to improving its accuracy and stability. The MUSCL methodology is an effective approach to increasing computational efficiency and resolution. The limiter should be selected appropriately by balancing compressive and diffusive performance.
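The compressive/diffusive trade-off mentioned above can be made concrete with three classic limiter functions. The sketch below (not taken from the paper's own scheme set) evaluates each limiter on the ratio r of consecutive solution gradients:

```python
# Sketch of three classic flux limiters, each a function of the ratio r
# of consecutive solution gradients (r < 0 at extrema, r ~ 1 in smooth flow).
def minmod(r):
    # Most diffusive of the three: clips the reconstructed slope hard.
    return max(0.0, min(1.0, r))

def van_leer(r):
    # Smooth compromise between diffusive and compressive behaviour.
    return (r + abs(r)) / (1.0 + abs(r))

def superbee(r):
    # Most compressive: sharpens discontinuities, but may square off
    # smooth extrema if applied indiscriminately.
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))
```

All three vanish for r <= 0 (local extrema), which is what keeps the reconstruction total-variation diminishing; they differ only in how aggressively they steepen profiles for r > 0.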
Abstract: Design for cost (DFC) is a method that reduces life
cycle cost (LCC) from the designer's point of view. The multiple
domain features mapping (MDFM) methodology was introduced in DFC;
using MDFM, design features can be used to estimate the LCC. From
the angle of DFC, the design features of family cars were
obtained, such as overall dimensions, engine power and emission
volume. At the conceptual design stage, the cars' LCC was
estimated using the back propagation (BP) artificial neural
network (ANN) method and case-based reasoning (CBR). Hamming
space was used to measure the similarity among cases in the CBR
method, while the Levenberg-Marquardt (LM) algorithm and a
genetic algorithm (GA) were used in the ANN. The differences
between the CBR and ANN LCC estimation models are presented. Each
method has its own shortcomings, and by combining ANN and CBR,
improved estimation accuracy was obtained. Firstly, the ANN was
used to select the design features that affect LCC. Secondly, the
LCC estimation results of the ANN were used to raise the accuracy
of LCC estimation in the CBR method. Thirdly, the ANN was used to
estimate the LCC errors and correct the errors in the CBR
estimation results when the accuracy was insufficient. Finally,
economy family cars and a sport utility vehicle (SUV) were given
as LCC estimation cases using this hybrid approach combining ANN
and CBR.
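As an illustration of the Hamming-space similarity used in the CBR step, the following minimal sketch retrieves the stored case most similar to a query. The feature names and LCC values are purely illustrative, not data from the paper:

```python
# Sketch of CBR case retrieval with Hamming similarity over discretised
# design features. Feature values and LCC figures are illustrative.
def hamming_similarity(case_a, case_b):
    # Fraction of discretised design features on which two cases agree.
    matches = sum(1 for a, b in zip(case_a, case_b) if a == b)
    return matches / len(case_a)

def retrieve_most_similar(query, case_base):
    # case_base: list of (feature_tuple, lcc) pairs; return the closest one.
    return max(case_base, key=lambda case: hamming_similarity(query, case[0]))
```

For example, a query `("small", "1.6L", "high")` against stored cases `("small", "1.6L", "low")` and `("large", "3.0L", "high")` matches the first on 2 of 3 features and the second on only 1 of 3, so the first case's LCC would be reused.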
Abstract: Globalization, and the increasingly tight competition among companies that follows from it, has increased the importance of well-timed decision making. Devising and employing effective strategies that are flexible and adaptive to a changing market stands a greater chance of being effective in the long term. At the same time, a clear focus on managing the entire product lifecycle has emerged as a critical area for investment. Well-organized tools that employ past experience in new cases therefore help in making proper managerial decisions. Case-based reasoning (CBR) solves a new problem by using or adapting solutions to old problems. In this paper, an adapted CBR model with k-nearest neighbour (k-NN) retrieval is employed to provide suggestions for better decision making, adopted for a given product in the middle-of-life phase. The set of solutions is weighted by CBR on the principle of group decision making. A wrapper approach with a genetic algorithm is employed to generate optimal feature subsets. A dataset from a department store, covering various products collected over two years, has been used. A k-fold approach is used to evaluate the classification accuracy rate. Empirical results are compared with the classical case-based reasoning algorithm, which has no special process for feature selection, with a CBR-PCA algorithm based on filter-approach feature selection, and with an artificial neural network. The results indicate that the predictive performance of the model in this specific case is more effective than that of the two CBR algorithms.
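The k-NN retrieval step inside the CBR cycle can be sketched as below. The inverse-distance vote weighting and the feature/label values are assumptions for illustration, not the paper's exact scheme:

```python
import math

# Hypothetical sketch of k-NN retrieval inside a CBR cycle: the k closest
# past cases vote on the suggested decision, weighted by inverse distance.
def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_suggest(query, cases, k=3):
    # cases: list of (feature_vector, decision_label) pairs
    nearest = sorted(cases, key=lambda c: euclidean(query, c[0]))[:k]
    votes = {}
    for features, label in nearest:
        # Inverse-distance weighting (small epsilon guards exact matches).
        weight = 1.0 / (1e-9 + euclidean(query, features))
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)
```

In a real middle-of-life setting the labels would be past managerial decisions and the features would be the GA-selected product attributes.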
Abstract: This paper proposes a high-level feature for online Lao handwriting recognition. The feature must be high-level enough that it does not change when characters are written by different persons at different speeds and proportions (shorter or longer strokes, heads, tails, loops, curves). In this high-level feature, a character is divided into a sequence of curve segments, where a segment starts where the curve reverses rotation (between counter-clockwise and clockwise). In each segment, the following features are gathered: the cumulative change in curve direction (negative for clockwise), the cumulative curve length, and the cumulative left-to-right, right-to-left, top-to-bottom and bottom-to-top travel (the cumulative change along the X and Y axes of the segment). This feature is simple yet robust enough for high-accuracy recognition. It can be gathered by parsing the original time-sampled sequence of X, Y pen locations without re-sampling. We also experimented with other segmentation points, such as the maximum-curvature points widely used by other researchers. Experimental results show a recognition rate of 94.62%, compared with 75.07% when using maximum-curvature points; the difference is due to the large variation of turning points in handwriting.
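A minimal sketch of the segmentation and per-segment features described above might look as follows. Detecting rotation reversal via the sign of the cross product of successive pen-stroke vectors is one plausible implementation, not necessarily the paper's exact parser:

```python
import math

# Sketch: cut the pen trajectory where the curve reverses rotation
# (sign change of the cross product of successive stroke vectors), then
# accumulate per-segment turning angle, arc length, and x/y travel.
def describe(pts):
    length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    turn = 0.0
    for (x0, y0), (x1, y1), (x2, y2) in zip(pts, pts[1:], pts[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        # Wrap the heading change into (-pi, pi]; negative = clockwise.
        turn += math.atan2(math.sin(a2 - a1), math.cos(a2 - a1))
    return {"turn": turn, "length": length,
            "dx": pts[-1][0] - pts[0][0], "dy": pts[-1][1] - pts[0][1]}

def segment_features(points):
    segments, current = [], [points[0], points[1]]
    prev_sign = 0
    for i in range(2, len(points)):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 2], points[i - 1], points[i]
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        sign = (cross > 0) - (cross < 0)
        if prev_sign and sign and sign != prev_sign:
            segments.append(describe(current))  # rotation reversed: close segment
            current = [points[i - 1]]
        if sign:
            prev_sign = sign
        current.append(points[i])
    segments.append(describe(current))
    return segments
```

A straight stroke yields a single segment with zero turning angle; an S-shaped stroke splits into two segments at the inflection.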
Abstract: This paper reports a new pattern recognition approach for face recognition. The biological model of the light receptors in the human eye (cones and rods) and the way they are associated with pattern vision forms the basis of this approach. The functional model is simulated using CWD and WPD. The paper also discusses the experiments performed for face recognition using features extracted from images in the AT&T face database. Artificial neural network and k-nearest neighbour classifier algorithms are employed for recognition. A feature vector is formed for each face image in the database, and recognition accuracies are computed and compared across the classifiers. Simulation results show that the proposed method outperforms traditional feature extraction methods for pattern recognition in terms of recognition accuracy for face images with pose and illumination variations.
Abstract: Prediction of fault-prone modules provides one way to
support software quality engineering. Clustering is used to
determine the intrinsic grouping in a set of unlabelled data, and
among the various clustering techniques available in the
literature, the K-Means approach is the most widely used. This
paper introduces a K-Means-based clustering approach for finding
the fault proneness of object-oriented systems. The contribution
of this paper is that it uses metric values of the JEdit
open-source software to generate rules for categorizing software
modules into faulty and non-faulty classes, which are thereafter
validated empirically. The results are measured in terms of
prediction accuracy, probability of detection and probability of
false alarms.
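A minimal K-Means sketch is shown below, here in one dimension with each module reduced to a single aggregated metric value; this reduction is an assumption for brevity, since the paper clusters multi-dimensional JEdit metric vectors:

```python
# Minimal 1-D k-means sketch: alternate between assigning each value to
# its nearest centre and recomputing centres as cluster means.
def kmeans_1d(values, k=2, iters=20):
    # Spread the initial centres across the sorted data (simple heuristic).
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters
```

In the fault-proneness setting, the cluster whose centre has the worse metric values would be labelled "faulty" and the other "non-faulty".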
Abstract: In recent years, Radio Frequency Identification (RFID)
has attracted the interest of many researchers, especially for
indoor positioning, since the innate properties of RFID are well
suited to achieving it. Many algorithms and schemes have been
proposed for RFID-based positioning systems, but most of them
lack environmental considerations, which induces inaccuracy in
application. In this research, a number of algorithms and schemes
for RFID indoor positioning are discussed with regard to their
effectiveness in application, and some rules are summarized for
achieving accurate positioning. In addition, a new term, "Noise
Factor", is introduced to describe the signal loss between the
target and an obstacle. As a result, experimental data, and not
only simulations, can be obtained, and the performance of the
positioning system can be expressed substantively.
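One way to fold such a noise factor into range estimation is as an additive attenuation term in a log-distance path-loss model. The sketch below is a common textbook formulation with illustrative constants, not the paper's own model:

```python
# Hedged sketch: RSSI-based range estimation with a log-distance path-loss
# model plus an additive "noise factor" (NF, in dB) standing in for the
# obstacle-induced signal loss. All constants are illustrative.
def estimate_distance(rssi_dbm, p0_dbm=-40.0, n=2.0, noise_factor_db=0.0):
    # p0_dbm: received power at the 1 m reference distance
    # n: path-loss exponent (2 in free space, larger indoors)
    # noise_factor_db: extra attenuation attributed to obstacles
    return 10 ** ((p0_dbm - rssi_dbm - noise_factor_db) / (10.0 * n))
```

Ignoring the obstacle term makes a tag behind a wall appear farther away than it is; subtracting the noise factor before inverting the model corrects that bias.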
Abstract: The fundamental aim of the extended expansion concept
is to achieve higher work output, which in turn leads to higher
thermal efficiency. This concept is compatible with the
application of a turbocharger and a low heat rejection (LHR)
engine. The LHR engine was developed by coating the piston crown,
the cylinder head inside with valves, and the cylinder liner with
a partially stabilized zirconia coating of 0.5 mm thickness.
Extended expansion in diesel engines is termed the Miller cycle,
in which the expansion ratio is increased by reducing the
compression ratio through modification of the inlet cam for late
inlet valve closing. The specific fuel consumption is reduced to
an appreciable level, and the thermal efficiency of the extended
expansion turbocharged LHR engine is improved.
In this work, a thermodynamic model was formulated and
developed to simulate the LHR-based extended expansion
turbocharged direct injection diesel engine. It includes a gas
flow model, a heat transfer model and a two-zone combustion
model. The gas exchange model is modified to incorporate the
Miller cycle by delaying the inlet valve closing timing, which
resulted in a considerable improvement in the thermal efficiency
of turbocharged LHR engines. The heat transfer model calculates
the convective and radiative heat transfer between the gas and
the wall, taking into account the combustion chamber surface
temperature swings. Using the two-zone combustion model, the
combustion parameters and the chemical equilibrium compositions
were determined. The chemical equilibrium compositions were used
to calculate the nitric oxide formation rate assuming a modified
Zeldovich mechanism. The accuracy of the model is scrutinized
against actual test results from the engine. The factors
affecting thermal efficiency and exhaust emissions were deduced,
and their influences discussed. In the final analysis, excellent
agreement is seen in all of these evaluations.
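The modified Zeldovich mechanism referred to above is usually based on the extended Zeldovich reactions. A standard textbook form (the specific rate constants and modifications of this paper's model are not reproduced here) is:

```latex
% Extended Zeldovich mechanism for thermal NO formation
\begin{align}
  \mathrm{N_2 + O}  &\rightleftharpoons \mathrm{NO + N} \\
  \mathrm{N  + O_2} &\rightleftharpoons \mathrm{NO + O} \\
  \mathrm{N  + OH}  &\rightleftharpoons \mathrm{NO + H}
\end{align}
% Assuming N atoms in quasi-steady state and neglecting reverse
% reactions, the initial NO formation rate reduces to
\[
  \frac{d[\mathrm{NO}]}{dt} \approx 2\,k_1\,[\mathrm{O}][\mathrm{N_2}],
\]
% where k_1 is the forward rate constant of the first reaction and the
% [O] concentration comes from the equilibrium composition calculation.
```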
Abstract: In this paper, we present an innovative scheme for
blindly extracting message bits from an image distorted by an
attack. A support vector machine (SVM) is used to nonlinearly
classify the bits of the embedded message. Traditionally, a hard
decoder is used under the assumption that the underlying model of
the Discrete Cosine Transform (DCT) coefficients does not change
appreciably. In the case of an attack, however, the distribution
of the image coefficients is heavily altered: the distributions
of the sufficient statistics corresponding to the antipodal
signals overlap at the receiving end, and a simple hard decoder
fails to classify them properly. We therefore treat the retrieval
of the antipodal message signal as a binary classification
problem, and machine learning techniques such as the SVM are used
to retrieve the message when a certain specific class of attacks
is most probable. In order to validate the SVM-based decoding
scheme, we take Gaussian noise as a test case. We generate a data
set using 125 images and 25 different keys. A polynomial-kernel
SVM achieved 100 percent accuracy on the test data.
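To illustrate the idea of nonlinear, kernel-based decoding of antipodal bits, the sketch below trains a kernel perceptron with a polynomial kernel. It is a simple stand-in for the SVM (which additionally solves a margin-maximizing quadratic program), and the toy two-dimensional data stands in for DCT-domain statistics:

```python
# Sketch: polynomial-kernel perceptron as a stand-in for the polynomial-
# kernel SVM decoder of antipodal message bits. Data is a toy stand-in.
def poly_kernel(x, y, degree=2, c=1.0):
    return (sum(a * b for a, b in zip(x, y)) + c) ** degree

def kernel_perceptron(train, epochs=10, degree=2):
    # train: list of (feature_vector, label) with label in {-1, +1}
    alphas = [0.0] * len(train)
    for _ in range(epochs):
        for i, (x, y) in enumerate(train):
            score = sum(a * yj * poly_kernel(xj, x, degree)
                        for a, (xj, yj) in zip(alphas, train))
            if y * score <= 0:          # misclassified: strengthen this example
                alphas[i] += 1.0
    def predict(x):
        s = sum(a * yj * poly_kernel(xj, x, degree)
                for a, (xj, yj) in zip(alphas, train))
        return 1 if s >= 0 else -1
    return predict
```

Like the SVM, the decision function lives entirely in kernel evaluations against training points, so the same code runs unchanged for any kernel degree.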
Abstract: An electromagnetic flow meter measures the rate of a conductive fluid precisely by measuring the variation of magnetic flux, which is related to the flow velocity. Its operation is based on Faraday's law of electromagnetic induction. In this equipment, a constant magnetostatic field is produced by an electromagnet (a winding around the tube) outside the pipe, and the voltage induced by the conductive liquid flow is measured by electrodes located on two opposite sides of the pipe wall. In this research, we consider a two-dimensional mathematical model that is solved by a numerical finite difference (FD) approach to calculate the induced potential between the electrodes. The fundamental design of the electromagnetic flow meter and its excitation winding, together with the simulations, are carried out using MATLAB and the PDE-Tool software. Finally, simulation results are presented to assess the improvement and accuracy of the technical design.
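In the spirit of the two-dimensional FD model above, the following sketch solves Laplace's equation by Jacobi iteration on a square grid, with fixed potentials on two opposite walls standing in for the electrode boundaries and insulating (zero-gradient) top and bottom walls. It is a toy stand-in for the MATLAB/PDE-Tool model, not the paper's formulation:

```python
# Jacobi finite-difference sketch for the 2-D potential: Dirichlet
# "electrode" walls on the left/right, insulating (Neumann) walls on
# the top/bottom. Grid size and iteration count are illustrative.
def solve_laplace(n=20, iters=2000, v_left=0.0, v_right=1.0):
    grid = [[0.0] * n for _ in range(n)]
    for row in grid:
        row[0], row[-1] = v_left, v_right      # electrode potentials
    for _ in range(iters):
        new = [row[:] for row in grid]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # Five-point stencil: average of the four neighbours.
                new[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                    + grid[i][j - 1] + grid[i][j + 1])
        # Insulating top/bottom walls: zero normal gradient (mirror rows).
        new[0] = new[1][:]
        new[-1] = new[-2][:]
        grid = new
    return grid
```

With these boundary conditions the converged potential varies linearly between the two electrode walls, which gives a simple sanity check on the iteration.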
Abstract: Several numerical schemes utilizing central difference
approximations have been developed to solve the Goursat problem.
In recent years, however, compact discretization methods, which
lead to high-order finite difference schemes, have been used,
since they are capable of achieving better accuracy as well as
preserving certain features of the equation, e.g. linearity. The
basic idea of the new scheme is to find compact approximations to
the derivative terms by differentiating the governing equations
centrally. Our primary interest is to study the performance of
the new scheme, when applied to two Goursat partial differential
equations, against the traditional finite difference scheme.
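The traditional central-difference marching scheme for the Goursat problem u_xy = f(x, y), the baseline against which the compact scheme is compared, can be sketched as follows (grid size and data functions are illustrative):

```python
# Sketch of the classical marching scheme for the Goursat problem
#   u_xy = f(x, y),  u(x, 0) = u_x0(x),  u(0, y) = u_0y(y),
# obtained by integrating f over each grid cell and approximating the
# integral with the midpoint value:
#   u[i+1][j+1] = u[i+1][j] + u[i][j+1] - u[i][j] + h^2 f(midpoint).
def solve_goursat(f, u_x0, u_0y, n=10, h=0.1):
    u = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        u[i][0] = u_x0(i * h)      # characteristic data on y = 0
        u[0][i] = u_0y(i * h)      # characteristic data on x = 0
    for i in range(n):
        for j in range(n):
            xm, ym = (i + 0.5) * h, (j + 0.5) * h   # cell midpoint
            u[i + 1][j + 1] = (u[i + 1][j] + u[i][j + 1] - u[i][j]
                               + h * h * f(xm, ym))
    return u
```

For constant f with zero boundary data the exact solution u = f·xy is bilinear, so the scheme reproduces it to rounding error, which makes a convenient correctness check.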
Abstract: Intrusion detection is a mechanism used to protect a
system and to analyse and predict the behaviour of system users.
An ideal intrusion detection system is hard to achieve due to
nonlinearity and to irrelevant or redundant features. This study
introduces a new anomaly-based intrusion detection model. The
suggested model is based on particle swarm optimisation and
nonlinear, multi-class and multi-kernel support vector machines.
Particle swarm optimisation is used for feature selection by applying
a new formula to update the position and the velocity of a particle;
the support vector machine is used as a classifier. The proposed
model is tested and compared with the other methods using the KDD
CUP 1999 dataset. The results indicate that this new method achieves
better accuracy rates than previous methods.
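A binary PSO for feature selection can be sketched as below. The sigmoid transfer of velocities into bit-flip probabilities and the inertia/acceleration constants are common textbook choices, not the paper's new update formula; the fitness function is a toy stand-in for the SVM classification accuracy:

```python
import math
import random

# Hedged sketch of binary PSO feature selection: each particle is a bit
# mask over features; velocities pass through a sigmoid to give the
# probability of setting each bit. Constants are textbook defaults.
def binary_pso(n_features, fitness, n_particles=10, iters=30, seed=0):
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=fitness)[:]
    for _ in range(iters):
        for p in range(n_particles):
            for d in range(n_features):
                r1, r2 = rng.random(), rng.random()
                # Standard velocity update: inertia + cognitive + social.
                vel[p][d] = (0.7 * vel[p][d]
                             + 1.5 * r1 * (pbest[p][d] - pos[p][d])
                             + 1.5 * r2 * (gbest[d] - pos[p][d]))
                prob = 1.0 / (1.0 + math.exp(-vel[p][d]))   # sigmoid
                pos[p][d] = 1 if rng.random() < prob else 0
            if fitness(pos[p]) > fitness(pbest[p]):
                pbest[p] = pos[p][:]
        gbest = max(pbest, key=fitness)[:]
    return gbest
```

In the intrusion-detection setting, `fitness` would train and score the SVM on the KDD CUP 1999 features selected by the mask.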
Abstract: Visual attention allows users to select the most
relevant information for ongoing behaviour. This paper presents a
study on: (i) the performance of human measurements; (ii) the
accuracy of human measurement of the peaks corresponding to
chemical quantities in Magnetic Resonance Spectroscopy (MRS)
graphs; and (iii) the effect of human measurements on
algorithm-based diagnosis. Participants' eye movements were
recorded using an eye-tracker (EyeLink II). The experiment
involved three participants examining 20 MRS graphs to estimate
the peaks of the chemical quantities that indicate the
abnormalities associated with cerebellar tumours (CT). The status
of each MRS graph was verified using a decision algorithm. The
analysis involved determining the human eye-movement pattern when
measuring the peaks of the spectrograms, the scan path, and the
relationship between the distribution of fixation durations and
the accuracy of measurement. In particular, the eye-tracking data
revealed which aspects of the spectrogram received more visual
attention and in what order they were viewed. This preliminary
investigation provides a proof of concept for the use of
eye-tracking technology as the basis for expanded CT diagnosis.
Abstract: A strip domain decomposition parallel algorithm for a fast direct Poisson solver is presented on a 3D Cartesian staggered grid. The parallel algorithm follows the principles of the sequential fast direct Poisson solver. Both Dirichlet and Neumann boundary conditions are addressed. Several test cases are likewise examined in order to shed light on the accuracy and efficiency of the strip domain parallelization algorithm. The current implementation shows very high efficiency when dealing with large grid meshes of up to 3.6 × 10^9 cells under a massively parallel approach, which demonstrates that the proposed algorithm is ready for massively parallel computing.
Abstract: Genetic algorithm (GA) based solution techniques are
found suitable for optimization because of their ability to
perform a simultaneous multidimensional search. Many GA variants
have been tried in the past to solve optimal power flow (OPF),
one of the nonlinear problems of electric power systems. Issues
such as the convergence speed and accuracy of the optimal
solution obtained after a number of generations of GA techniques,
and the handling of system constraints in OPF, are subjects of
discussion. The results obtained for GA-Fuzzy OPF on various
power systems have shown faster convergence and lower generation
costs compared with other approaches. This paper presents an
enhanced GA-Fuzzy OPF (EGA-OPF) using penalty factors to handle
line flow constraints and load bus voltage limits, for both the
normal network and a contingency case with congestion. In
addition to a crossover and mutation rate adaptation scheme,
which adapts the crossover and mutation probabilities for each
generation based on the fitness values of previous generations, a
block swap operator is also incorporated in the proposed EGA-OPF.
The line flow limits and load bus voltage magnitude limits are
handled by incorporating line overflow and load voltage penalty
factors, respectively, in the fitness function of each
chromosome. The effects of different penalty factor settings are
also analyzed under the contingent state.
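The penalty-factor handling of line-flow and voltage limits can be sketched as a penalised fitness of the following form; the quadratic penalties and the weight values are illustrative assumptions, not the paper's tuned settings:

```python
# Sketch of a penalised OPF fitness: base generation cost plus quadratic
# penalties for line-flow overloads and load-bus voltage violations.
def penalised_cost(gen_cost, line_flows, flow_limits, voltages,
                   v_min=0.95, v_max=1.05, k_flow=1000.0, k_volt=1000.0):
    # Penalise only the amount by which each limit is exceeded.
    flow_pen = sum(max(0.0, abs(f) - lim) ** 2
                   for f, lim in zip(line_flows, flow_limits))
    volt_pen = sum(max(0.0, v_min - v, v - v_max) ** 2 for v in voltages)
    return gen_cost + k_flow * flow_pen + k_volt * volt_pen
```

A chromosome whose power flow violates no limits keeps its raw generation cost; violations inflate its cost and so reduce its selection probability in the GA.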
Abstract: The accuracy of recent multidimensional dynamic vehicle
simulations has progressed significantly, and acceptable results
are provided not only for passive vehicles but also for active
vehicles normally equipped with advanced electronic components.
Increasing vehicle safety in design has recently become an
important subject, and many efforts have therefore been made to
increase vehicle stability, especially in turns. One of the most
important of these is adjusting the camber angle in the car's
suspension system. Optimal control of the camber angle, in
addition to improving vehicle stability, is effective for wheel
adhesion on the road, for reducing rubber abrasion, and for
acceleration and braking. Since an increase or decrease in the
camber angle impacts the stability of the vehicle, this paper
introduces a car suspension mechanism that can adjust the camber
angle and is both practical and inexpensive. To this end, a
passive double wishbone suspension system with variable camber
angle is introduced, and the variable camber mechanism is
designed and analyzed. To study the performance of the designed
system, the mechanism is modelled in the Visual Nastran software
and a kinematic analysis is presented.
Abstract: In this paper, the melting of a semi-infinite body as a
result of a moving laser beam has been studied. Because the
Fourier heat transfer equation does not have sufficient accuracy
at short times and large dimensions, a non-Fourier form of the
heat transfer equation has been used. Since the beam moves in the
x direction, the temperature distribution and the shape of the
melt pool are not symmetric; as a result, the problem is a
transient three-dimensional one. Thermophysical properties such
as the heat conductivity coefficient, density and heat capacity
are treated as functions of temperature and material state. The
enthalpy technique, used for the solution of phase change
problems, has been applied in an explicit finite volume form to
the hyperbolic heat transfer equation. This technique has been
used to calculate the transient temperature distribution in the
semi-infinite body and the growth rate of the melt pool. In order
to validate the numerical results, comparisons were made with
experimental data. Finally, the results were compared with those
of a similar problem that used the Fourier theory; the comparison
shows the influence of the infinite speed of heat propagation in
the Fourier theory on the temperature distribution and the melt
pool size.
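The non-Fourier (hyperbolic) heat conduction model referred to above is commonly the Cattaneo-Vernotte equation. In the constant-property case (shown here for clarity; the paper itself uses temperature- and state-dependent properties) it reads:

```latex
% Cattaneo-Vernotte (non-Fourier) conduction: the heat flux q lags the
% temperature gradient by a relaxation time tau
\[
  \tau \frac{\partial \mathbf{q}}{\partial t} + \mathbf{q} = -k\,\nabla T .
\]
% Eliminating q via the energy balance gives, for constant properties,
% the hyperbolic (telegraph-type) temperature equation
\[
  \tau \frac{\partial^{2} T}{\partial t^{2}}
  + \frac{\partial T}{\partial t}
  = \alpha \nabla^{2} T ,
\]
% with thermal diffusivity alpha = k / (rho c). Heat now propagates at
% the finite speed sqrt(alpha / tau), unlike the parabolic Fourier law.
```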
Abstract: Diagnosis and detection of arterial stiffness is very
important, as it gives an indication of the associated increased
risk of cardiovascular disease. To provide a cheap and easy
general screening technique that helps avoid future
cardiovascular complications due to rising arterial stiffness, an
algorithm based on the photoplethysmogram is proposed. The
photoplethysmograph signals are processed in MATLAB: the signal
is filtered, baseline wander is removed, peaks and valleys are
detected, and the signal is normalized. The area under the
catacrotic phase of the photoplethysmogram pulse curve is
calculated using the trapezoidal rule and then used, in
combination with other parameters such as age, height and blood
pressure, in a neural network for arterial stiffness detection.
The neural network was implemented, and a sensitivity of 80%, an
accuracy of 85% and a specificity of 90% were obtained on the
patient data. It is concluded that a neural network can detect
arterial stiffness from these risk factor parameters.
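The trapezoidal-rule area under the catacrotic (descending) phase can be sketched as follows. Taking the segment from the pulse peak to the pulse end is one simple definition of that phase, and the sample values are illustrative:

```python
# Sketch: composite trapezoidal rule over uniformly spaced samples, applied
# to the catacrotic (descending) phase of a normalized PPG pulse.
def trapezoid_area(samples, dt):
    # Trapezoidal rule: full weight for interior samples, half for the ends.
    return dt * (sum(samples) - 0.5 * (samples[0] + samples[-1]))

def catacrotic_area(pulse, dt):
    # Take the pulse from its systolic peak to its end as the
    # catacrotic phase (a simple working definition).
    peak = max(range(len(pulse)), key=lambda i: pulse[i])
    return trapezoid_area(pulse[peak:], dt)
```

This area, together with age, height and blood pressure, would form one input vector to the stiffness-detection neural network.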
Abstract: The myoelectric signal (MES) is one of the biosignals
used to help humans control equipment. Recent approaches to MES
classification for controlling prosthetic devices with pattern
recognition techniques have revealed two problems: first, the
classification performance of the system starts to degrade as the
number of motion classes to be classified increases; second, the
additional, complicated methods used to solve the first problem
increase the computational cost of a multifunction myoelectric
control system. In an effort to solve these problems and to
achieve a feasible design for real-time implementation with high
overall accuracy, this paper presents a new method for feature
extraction in MES recognition systems. The method extracts
features using the Wavelet Packet Transform (WPT) applied to the
MES from multiple channels, and then employs the fuzzy c-means
(FCM) algorithm to generate a measure that judges the suitability
of features for classification. Finally, Principal Component
Analysis (PCA) is used to reduce the size of the data before the
classification accuracy is computed with a multilayer perceptron
neural network. The proposed system produces powerful
classification results (99% accuracy) using only a small portion
of the original feature set.
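The FCM step can be illustrated with a minimal fuzzy c-means in one dimension; m is the usual fuzziness exponent, and the initialisation and data below are illustrative, not the paper's multi-channel WPT features:

```python
# Minimal 1-D fuzzy c-means sketch: alternate the standard FCM updates of
# memberships u and cluster centres (m is the fuzziness exponent).
def fcm_1d(values, c=2, m=2.0, iters=50):
    centers = [min(values), max(values)][:c]    # simple c=2 initialisation
    u = []
    for _ in range(iters):
        u = []
        for v in values:
            dists = [abs(v - ck) + 1e-12 for ck in centers]
            # u_ik = 1 / sum_j (d_ik / d_jk)^(2 / (m - 1))
            u.append([1.0 / sum((dists[i] / dj) ** (2.0 / (m - 1.0))
                                for dj in dists) for i in range(c)])
        # c_i = sum_k u_ik^m x_k / sum_k u_ik^m
        centers = [sum(u[j][i] ** m * values[j] for j in range(len(values)))
                   / sum(u[j][i] ** m for j in range(len(values)))
                   for i in range(c)]
    return centers, u
```

In the feature-scoring role described above, how crisply a feature's values separate into motion-class clusters (i.e. how far its memberships are from 0.5) can serve as the suitability measure.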