Abstract: In this paper, a pipelined version of genetic algorithm,
called PLGA, and a corresponding hardware platform are described.
The basic operations of conventional GA (CGA) are made pipelined
using an appropriate selection scheme. The selection operator, used
here, is stochastic in nature and is called SA-selection. This helps
maintain the basic generational nature of the proposed pipelined
GA (PLGA). A number of benchmark problems are used to compare
the performances of conventional roulette-wheel selection and the
SA-selection. These include unimodal and multimodal functions with
dimensionality varying from very small to very large. The
SA-selection scheme is found to give performance comparable to the
classical roulette-wheel selection scheme across all instances, in
terms of both solution quality and rate of convergence.
The speedups obtained by PLGA for different benchmarks
are found to be significant. It is shown that a complete hardware
pipeline can be developed using the proposed scheme, if parallel
evaluation of the fitness expression is possible. In this connection
a low-cost but very fast hardware evaluation unit is described.
Results of simulation experiments show that in a pipelined hardware
environment, PLGA will be much faster than CGA. In terms of
efficiency, PLGA is also found to outperform parallel GA (PGA).
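As a point of reference for the selection comparison above, classical roulette-wheel selection can be sketched as follows (a generic Python illustration, not the paper's PLGA implementation; the SA-selection operator itself is not reproduced here):

```python
import random

def roulette_wheel_select(population, fitnesses):
    """Select one individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    acc = 0.0
    for ind, fit in zip(population, fitnesses):
        acc += fit
        if acc >= pick:
            return ind
    return population[-1]  # numerical safety fallback

# Example: fitter individuals are chosen more often.
random.seed(0)
pop = ["a", "b", "c"]
fits = [1.0, 1.0, 8.0]
counts = {p: 0 for p in pop}
for _ in range(1000):
    counts[roulette_wheel_select(pop, fits)] += 1
```

With the fitness values above, individual "c" is expected to win roughly 80% of the draws.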
Abstract: This paper proposes new hybrid approaches for face
recognition. Gabor wavelets representation of face images is an
effective approach for both facial action recognition and face
identification. Performing dimensionality reduction and linear
discriminant analysis on the down-sampled Gabor wavelet faces can
increase the discriminative ability. The nearest feature space classifier
is extended to various similarity measures. In our experiments, the
proposed Gabor wavelet faces combined with the extended nearest
feature space classifier show very good performance, achieving a
maximum correct recognition rate of 93% on the ORL data set without
any preprocessing step.
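For illustration, the real part of a 2D Gabor filter of the kind underlying Gabor wavelet representations can be generated as below (a standard textbook form; the kernel size and parameter values are illustrative, not those used in the paper):

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2D Gabor filter: a Gaussian-windowed sinusoid
    oriented at angle theta."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

k = gabor_kernel(size=7, wavelength=4.0, theta=0.0, sigma=2.0)
```

Convolving a face image with a bank of such kernels at several orientations and wavelengths yields the Gabor wavelet faces that are then down-sampled and projected.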
Abstract: Wheeled mobile robots are restricted in their moving
range by obstacles or stairs. To overcome this weakness, we studied
the development of a mobile robot that uses an airship. Our airship
robot moves by recognizing arrow marks on the path. To enable the
airship robot to recognize arrow marks, we used edge-based
template matching, and to control the propeller units we used PID
and PD controllers. The results of experiments demonstrated that the
airship robot can move along the marks and can go up and down
stairs. This shows that the airship robot has the potential to move
through wide-ranging facilities.
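A minimal discrete PID controller of the kind used for the propeller units can be sketched as follows (an illustrative Python sketch driving a toy first-order plant; the gains and plant dynamics are invented, not the paper's):

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / self.dt)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order plant driven toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
y = 0.0
for _ in range(200):
    u = pid.update(1.0, y)
    y += (u - y) * 0.1  # simple plant response
```

A PD controller is the same structure with the integral gain set to zero.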
Abstract: This paper describes the design and results of FROID,
an outbound intrusion detection system built with agent technology
and supported by an attacker-centric ontology. The prototype
features a misuse-based detection mechanism that identifies remote
attack tools in execution. Misuse signatures, composed of attributes
selected through entropy analysis of outgoing traffic streams and of
process runtime data, are derived from execution variants of attack
programs. The core of the architecture is a mesh of self-contained
detection cells organized non-hierarchically that group agents in a
functional fashion. The experiments show performance gains when
the ontology is enabled as well as an increase in accuracy achieved
when correlation cells combine detection evidence received from
independent detection cells.
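The entropy analysis used for attribute selection can be illustrated with Shannon entropy (an assumption on our part: the abstract does not specify the entropy measure, and the attribute values below are invented):

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (bits) of a discrete attribute's value distribution."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A near-constant attribute carries little discriminating information;
# a varied one may be worth keeping in a signature.
constant_attr = ["TCP"] * 99 + ["UDP"]
varied_attr = ["TCP", "UDP", "ICMP", "GRE"] * 25
```

Attributes whose entropy best separates attack traffic from benign traffic would be the candidates retained for the misuse signatures.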
Abstract: The objective of this research is to optimize the cutting
parameters for a stair-shaped workpiece cut by CNC wire-cut EDM
(WEDM). The experimental material is SKD-11 steel in a stair shape
with step heights of 10, 20, 30 and 40 mm and a constant thickness of
10 mm, cut on a Sodick CNC wire-cut EDM machine, model
AD325L.
The experiments follow a 3^k full factorial design with two factors
at three levels, giving nine experiments with two replicates. The two
selected factors are servo voltage (SV) and servo feed rate (SF), and
the response is the cutting thickness error. The work is divided into
two experiments. The first experiment identifies the significant factor
at a 95% confidence level; SV is found to be the significant factor,
and the smallest cutting thickness error obtained is 17 microns, at an
SV of 46 volts. The results also show that the lower the SV value, the
smaller the thickness error of the workpiece. A second experiment is
therefore conducted to reduce the cutting thickness error as far as
possible by lowering SV. It confirms that SV is the significant factor
at a 95% confidence level and reduces the smallest cutting thickness
error to 11 microns, at an SV of 36 volts.
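The full factorial layout described above (two factors at three levels, nine treatment combinations) can be enumerated mechanically; in the sketch below the level values are hypothetical, and only the 2-factor/3-level structure matches the abstract:

```python
from itertools import product

def full_factorial(levels_per_factor):
    """Enumerate all runs of a full factorial design: the cross product
    of every factor's level set (3^2 = 9 runs for 2 factors at 3 levels)."""
    return list(product(*levels_per_factor))

# Hypothetical level settings for servo voltage (SV, volts) and
# servo feed rate (SF); the paper's actual levels are not all stated.
sv_levels = [36, 41, 46]
sf_levels = [4, 8, 12]
runs = full_factorial([sv_levels, sf_levels])
```

Each of the nine runs would then be replicated twice, as in the abstract, before the ANOVA for factor significance.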
Abstract: A social network is a set of people, organizations, or other social entities connected by some form of relationship. Social network analysis broadly covers the visual and mathematical representation of those relationships. The Web can also be considered a social network. This paper presents an innovative approach to analyzing a social network using a variant of the existing ant colony optimization algorithm, called the Clever Ant Colony Metaphor. Experiments are performed, and interesting findings and observations are inferred from the proposed model.
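The Clever Ant Colony Metaphor is a variant of ant colony optimization; a generic ACO for shortest paths on a weighted graph (not the paper's variant; the graph and parameters below are invented) looks like this:

```python
import random

def aco_shortest_path(graph, start, goal, n_ants=20, n_iters=30,
                      evaporation=0.5, seed=1):
    """Generic ant colony optimization for a shortest path on a digraph."""
    random.seed(seed)
    pher = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path, best_cost = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            node, path, visited = start, [start], {start}
            while node != goal:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:
                    break  # dead end: abandon this ant
                # Prefer edges with high pheromone and low weight.
                weights = [pher[(node, v)] / graph[node][v] for v in choices]
                node = random.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if node == goal:
                cost = sum(graph[a][b] for a, b in zip(path, path[1:]))
                tours.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost
        for edge in pher:            # pheromone evaporation
            pher[edge] *= (1 - evaporation)
        for path, cost in tours:     # deposit: shorter tours deposit more
            for a, b in zip(path, path[1:]):
                pher[(a, b)] += 1.0 / cost
    return best_path, best_cost

graph = {
    "A": {"B": 1, "C": 5},
    "B": {"C": 1, "D": 4},
    "C": {"D": 1},
    "D": {},
}
best_path, best_cost = aco_shortest_path(graph, "A", "D")
```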
Abstract: Cutting fluids, usually in the form of a liquid, are
applied to the chip formation zone in order to improve the cutting
conditions. Cutting fluid can be expensive and represents a biological
and environmental hazard that requires proper recycling and
disposal, thus adding to the cost of the machining operation. For
these reasons dry cutting or dry machining has become an
increasingly important approach; in dry machining no coolant or
lubricant is used. This paper discusses the effect of dry cutting on
cutting force and tool life when machining an aerospace material
(Haynes 242) with two different coated carbide cutting tools
(TiAlN and TiN/MT-TiCN/TiN). The response surface method (RSM)
was used to minimize the number of experiments. Particle Swarm
Optimisation (PSO) models were developed to optimize the
machining parameters (cutting speed, feed rate and axial depth) and
obtain the optimum cutting force and tool life. It was observed that
the carbide cutting tool coated with TiAlN performed better in dry
cutting than TiN/MT-TiCN/TiN. On the other hand, TiAlN performed
even better when a 100% water-soluble coolant was used. Due to the
high temperatures produced when machining aerospace materials, the
cutting tool still requires a lubricant to sustain heat transfer from
the workpiece.
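A minimal particle swarm optimizer of the kind used for the machining-parameter models can be sketched as follows (the objective here is a stand-in sphere function, not the paper's cutting force or tool life models; coefficients are typical textbook values):

```python
import random

def pso_minimize(f, bounds, n_particles=20, n_iters=100, seed=2):
    """Minimal particle swarm optimization for a continuous objective."""
    random.seed(seed)
    dim = len(bounds)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social coefficients
    xs = [[random.uniform(lo, hi) for lo, hi in bounds]
          for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f

# Toy stand-in objective: minimize the sphere function in 3 dimensions.
best, val = pso_minimize(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```

In the paper's setting, `f` would be the fitted RSM response model over (cutting speed, feed rate, axial depth).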
Abstract: Rutting is one of the major load-related distresses in airport flexible pavements. Rutting in paving materials develops gradually with an increasing number of load applications, usually appearing as longitudinal depressions in the wheel paths, sometimes accompanied by small upheavals to the sides. Significant research has been conducted to determine the factors which affect rutting and how they can be controlled. Using experimental design concepts, a series of tests can be conducted while varying the levels of the different parameters that could cause rutting in airport flexible pavements. If a proper experimental design is used, the results obtained from these tests can give better insight into the causes of rutting and into the presence of interactions and synergisms among the system variables that influence rutting. Although laboratory experiments are traditionally conducted in a controlled fashion to understand the statistical interaction of variables in such situations, this study attempts to identify the critical system variables influencing airport flexible pavement rut depth from a statistical DoE perspective using real field data from a full-scale test facility. The test results strongly indicate that the response (rut depth) is too noisy to allow determination of a good model. From a statistical DoE perspective, two major changes are proposed for this experiment: (1) actual replication of the tests is definitely required, and (2) nuisance variables need to be identified and blocked properly. Further investigation is necessary to determine possible sources of noise in the experiment.
Abstract: The experiments were performed in a batch set-up
under different concentrations of Cu(II) (0.2 to 0.9 g/l), pH (4-6)
and temperatures (20-40 °C), with the dosage of teak leaves powder
(as biosorbent) varying from 0.3 to 0.5 g/l. The kinetics of the
interactions were tested with the pseudo-first-order Lagergren
equation, and the value of k1 was found to be 6.909 x 10^-3 min^-1.
The biosorption data gave a good fit with the Langmuir and
Freundlich isotherms; the Langmuir monolayer capacity (qm) was
found to be 166.78 mg/g, and the Freundlich adsorption capacity (Kf)
was estimated as 2.49 l/g. The mean values of the thermodynamic
parameters ΔH, ΔS and ΔG were -62.42 kJ/mol, -0.219 kJ/(mol K)
and -1.747 kJ/mol at 293 K for a solution containing 0.4 g/l of
Cu(II), showing the biosorption to be thermodynamically favourable.
These results show the good potential of teak leaves as a biosorbent
for the removal of Cu(II) from aqueous solutions.
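The Langmuir monolayer capacity qm reported above is typically estimated from the linearised isotherm Ce/qe = Ce/qm + 1/(qm*KL); a sketch on synthetic data (generated from invented parameters, not the paper's measurements) follows:

```python
def langmuir_fit(ce, qe):
    """Estimate Langmuir parameters q_m and K_L from equilibrium data
    via the linearised form: Ce/qe = Ce/q_m + 1/(q_m*K_L)."""
    xs = ce
    ys = [c / q for c, q in zip(ce, qe)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    qm = 1.0 / slope          # slope = 1/q_m
    kl = slope / intercept    # intercept = 1/(q_m*K_L)
    return qm, kl

# Synthetic data from a known isotherm (q_m = 150 mg/g, K_L = 0.2 l/mg).
qm_true, kl_true = 150.0, 0.2
ce = [5, 10, 20, 40, 80]
qe = [qm_true * kl_true * c / (1 + kl_true * c) for c in ce]
qm_est, kl_est = langmuir_fit(ce, qe)
```

Since the synthetic data lie exactly on the isotherm, the fit recovers the known parameters; real equilibrium data would recover them only approximately.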
Abstract: Face authentication for access control is a face
membership authentication task that admits an incoming person if his
face is recognized as that of an enrolled person, and rejects him
otherwise. Face membership authentication is a two-class
classification problem to which SVMs (Support Vector Machines)
have been successfully applied, showing better performance than
conventional threshold-based classification. However, most previous
SVMs have been trained on image feature vectors extracted from the
face images of each class member (enrolled class/unenrolled class),
so they are not robust to variations in illumination, pose and facial
expression, and are strongly affected by changes in the membership
of the enrolled class.
In this paper, we propose an effective face membership
authentication method based on an SVM using class-discriminating
features, which distinctively represent an incoming face image's
associability with each class. These class-discriminating features are
only weakly related to image features, so they are less affected by
variations in illumination, pose and facial expression.
Experiments show that the proposed face membership
authentication method performs better than threshold-rule-based or
conventional SVM-based authentication methods and is relatively
less affected by changes in member size and membership.
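The two-class formulation can be illustrated with a minimal margin-driven linear classifier (a simplified stand-in for a full SVM: hinge-style updates without regularisation, trained on invented toy feature vectors):

```python
def train_margin_classifier(data, eta=0.1, epochs=100):
    """Margin-perceptron sketch of the SVM idea: update on any sample
    whose functional margin y*(w.x + b) falls below 1."""
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in data:  # y in {-1, +1}
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) < 1:
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                b += eta * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Toy separable data standing in for class-discriminating feature vectors
# (-1: unenrolled class, +1: enrolled class).
data = [([0.0, 0.0], -1), ([0.5, 0.2], -1), ([0.2, 0.4], -1),
        ([3.0, 3.0], 1), ([2.5, 3.2], 1), ([3.2, 2.6], 1)]
w, b = train_margin_classifier(data)
```

A real SVM would additionally maximize the margin via regularisation (and possibly a kernel); this sketch keeps only the margin-violation update.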
Abstract: This paper presents a methodical approach for designing and optimizing process parameters in oil blending industries. Twenty-seven replicated experiments were conducted for the production of A-Z crown super oil (SAE20W/50), employing an L9 orthogonal array to establish the process response parameters. A power law model was fitted to the experimental data, and the obtained model was optimized applying the central composite design (CCD) of response surface methodology (RSM). A quadratic model was found to be significant for the production of A-Z crown super oil. The study identified and specified four new lubricant formulations that conform to the ISO oil standard in the course of analyzing the batch productions of A-Z crown super oil: L1: KV = 21.8293 cSt, BS200 = 9430.00 litres, Ad102 = 11024.00 litres, PVI = 2520 litres; L2: KV = 22.513 cSt, BS200 = 12430.00 litres, Ad102 = 11024.00 litres, PVI = 2520 litres; L3: KV = 22.1671 cSt, BS200 = 9430.00 litres, Ad102 = 10481.00 litres, PVI = 2520 litres; L4: KV = 22.8605 cSt, BS200 = 12430.00 litres, Ad102 = 10481.00 litres, PVI = 2520 litres. The analysis of variance showed that the quadratic model is significant for kinematic viscosity, while the R² value of 0.99936 showed that the variation of kinematic viscosity is explained by its relationship with the control factors. This study therefore yielded appropriate blending proportions of lubricant base oil and additives, and recommends an optimal kinematic viscosity of A-Z crown super oil (SAE20W/50) of 22.86 cSt.
Abstract: Steady state experiments have been conducted for
natural and mixed convection heat transfer, from five different sized
protruding discrete heat sources, placed at the bottom position on a
PCB and mounted on a vertical channel. The characteristic length
(Lh) of the heat sources varies from 0.005 to 0.011 m. The study
covers different ranges of Reynolds number and modified Grashof
number. From the experiments, the surface temperature distribution
and the Nusselt number of the discrete heat sources have been
obtained, and the effects of Reynolds number and Richardson number
on them are discussed. The objective is to find the rate of heat
dissipation from the heat sources placed at the bottom position on a
PCB, and to compare the two modes of cooling.
Abstract: In this work, we present a novel active learning approach
for learning a visual object detection system. Our system is composed
of an active learning mechanism wrapped around a sub-algorithm
that implements an online boosting-based learning object detector.
At the core is a combination of a bootstrap procedure and a
semi-automatic learning process based on online boosting. The idea
is to exploit the availability of the classifier during learning to
automatically label training samples and incrementally improve the
classifier, which reduces labeling effort while obtaining better
performance. In addition, we propose a verification process that
further improves the classifier by allowing re-updates on seen data
during learning to stabilize the detector. The main contribution of
this empirical study is a demonstration that active learning based on
an online boosting approach trained in this manner can achieve
results comparable to, or even better than, a framework trained in
the conventional manner with much more labeling effort. Empirical
experiments on challenging data sets for specific object detection
problems show the effectiveness of our approach.
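The self-labeling idea, exploiting the classifier during learning to label samples automatically, can be sketched with a simple confidence threshold (a toy nearest-centroid classifier stands in for the online boosted detector; the data, threshold and margin-based confidence are all our invention):

```python
import math

def centroid_classify(x, centroids):
    """Nearest-centroid prediction with a confidence score: the margin
    between the distances to the two closest class centroids."""
    dists = sorted((math.dist(x, c), label) for label, c in centroids.items())
    confidence = dists[1][0] - dists[0][0]
    return dists[0][1], confidence

def active_self_label(seed_data, unlabeled, threshold=1.0):
    """Bootstrap loop: confidently classified samples are auto-labeled and
    folded back in; uncertain ones are left for a human oracle."""
    labeled = dict(seed_data)  # point -> label
    for x in unlabeled:
        centroids = {}
        for label in set(labeled.values()):
            pts = [p for p, l in labeled.items() if l == label]
            centroids[label] = tuple(sum(v) / len(pts) for v in zip(*pts))
        pred, conf = centroid_classify(x, centroids)
        if conf > threshold:   # confident: auto-label, no human effort
            labeled[x] = pred
    return labeled

seed_data = {(0.0, 0.0): "neg", (4.0, 4.0): "pos"}
unlabeled = [(0.5, 0.5), (3.5, 3.5), (2.0, 2.0)]
result = active_self_label(seed_data, unlabeled)
```

The ambiguous point midway between the classes is left unlabeled, which is exactly the sample an active learner would route to a human annotator.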
Abstract: Wind catchers are traditional natural ventilation
systems attached to buildings in order to ventilate the indoor air. The
most common type of wind catcher is the four-sided one, which is
capable of catching wind from all directions. CFD simulation is a
suitable way to evaluate wind catcher performance. Since the
accuracy of CFD results is a concern, sensitivity analysis is crucial
to determine the effect of different CFD settings on the results. This
paper presents a series of 3D steady RANS simulations for a generic
isolated four-sided wind catcher attached to a room, subjected to
wind directions ranging from 0° to 180° at intervals of 45°. The CFD
simulations are validated against detailed wind tunnel experiments.
The influence of an extensive range of computational parameters is
explored, including the resolution of the computational grid, the size
of the computational domain and the turbulence model. The study
finds that CFD simulation is a reliable method for wind catcher
studies, but that it is less accurate in predicting models with
non-perpendicular wind directions.
Abstract: In this paper, a two-dimensional mathematical model is developed for estimating the extent of inland inundation due to the Indonesian tsunami of 2004 along the coastal belts of Peninsular Malaysia and Thailand. The model consists of the shallow water equations together with open and coastal boundary conditions. In order to route the water wave towards the land, the coastal boundary is treated as a time-dependent moving boundary. For the computation of tsunami inundation, the initial tsunami wave is generated in the deep ocean with the strength of the Indonesian tsunami of 2004. Several numerical experiments are carried out with different beach slopes to examine how the extent of inundation varies with slope. The simulated inundation is found to decrease as the slope of the orography increases. Inundation and recession are each found to be directly proportional to run-up.
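For reference, the shallow water equations such a model is built on are commonly written in the following depth-averaged form (notation ours: ζ surface elevation, h undisturbed depth, (u, v) depth-averaged velocities, f Coriolis parameter, g gravity, C_f bottom friction coefficient; the paper's exact formulation and boundary treatment may differ):

```latex
\frac{\partial \zeta}{\partial t}
  + \frac{\partial}{\partial x}\big[(\zeta + h)u\big]
  + \frac{\partial}{\partial y}\big[(\zeta + h)v\big] = 0

\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x}
  + v\frac{\partial u}{\partial y} - fv
  = -g\frac{\partial \zeta}{\partial x}
    - \frac{C_f\, u\sqrt{u^2+v^2}}{\zeta + h}

\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x}
  + v\frac{\partial v}{\partial y} + fu
  = -g\frac{\partial \zeta}{\partial y}
    - \frac{C_f\, v\sqrt{u^2+v^2}}{\zeta + h}
```

The moving coastal boundary described in the abstract amounts to advancing or retreating the wet/dry interface of this system as ζ + h crosses zero.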
Abstract: In order to accelerate similarity search in high-dimensional databases, we propose a new hierarchical indexing method composed of an offline and an online phase; our contribution concerns both phases. In the offline phase, after gathering the data into clusters and constructing a hierarchical index, the main originality of our contribution is a method for constructing bounding forms of clusters that avoid overlapping. For the online phase, our idea considerably improves the performance of similarity search; we have also developed an adapted search algorithm for this phase. Our method, named NOHIS (Non-Overlapping Hierarchical Index Structure), uses Principal Direction Divisive Partitioning (PDDP) as the clustering algorithm. The principle of PDDP is to divide the data recursively into two sub-clusters; the division is performed with the hyperplane orthogonal to the principal direction derived from the covariance matrix and passing through the centroid of the cluster to be divided. The data of each of the two resulting sub-clusters are enclosed by a minimum bounding rectangle (MBR), and the two MBRs are oriented along the principal direction; consequently, non-overlapping between the two forms is assured. Experiments use databases containing image descriptors. Results show that the proposed method outperforms sequential scan and the SR-tree in processing k-nearest neighbors queries.
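One PDDP division step, splitting a cluster by the hyperplane through its centroid orthogonal to the principal direction, can be sketched as follows (pure-Python power iteration on invented 2D points; the NOHIS bounding-form construction is not reproduced):

```python
def pddp_split(points, iters=100):
    """One PDDP step: split points by the hyperplane through the centroid,
    orthogonal to the principal direction (found via power iteration)."""
    n, dim = len(points), len(points[0])
    mean = [sum(p[d] for p in points) / n for d in range(dim)]
    centered = [[p[d] - mean[d] for d in range(dim)] for p in points]
    # Power iteration on the covariance matrix, applied implicitly.
    v = [1.0] * dim
    for _ in range(iters):
        proj = [sum(c[d] * v[d] for d in range(dim)) for c in centered]
        w = [sum(proj[i] * centered[i][d] for i in range(n)) / n
             for d in range(dim)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Sign of the projection onto the principal direction decides the side.
    left = [p for p, c in zip(points, centered)
            if sum(c[d] * v[d] for d in range(dim)) <= 0]
    right = [p for p, c in zip(points, centered)
             if sum(c[d] * v[d] for d in range(dim)) > 0]
    return left, right

small = [(0.0, 0.0), (1.0, 0.1), (0.5, -0.1)]
big = [(10.0, 0.0), (11.0, 0.1), (10.5, -0.1)]
left, right = pddp_split(small + big)
```

Applying this split recursively to each side yields the binary hierarchy that NOHIS then wraps in oriented, non-overlapping MBRs.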
Abstract: The Navier–Stokes equations for unsteady, incompressible, viscous fluids in the axisymmetric coordinate system are solved using a control volume method. The volume-of-fluid (VOF) technique is used to track the free surface of the liquid. Model predictions are in good agreement with experimental measurements. It is found that the dynamic processes after impact are sensitive to the initial droplet velocity and the liquid pool depth. The time evolution of the crown height and diameter is obtained by numerical simulation. The critical We number for splashing (Wecr) is studied for Oh (Ohnesorge) numbers in the range of 0.01-0.1; the results compare well with those of the experiments.
Abstract: In this paper we propose a segmentation approach
based on the vector quantization technique, using Kekre's fast
codebook generation algorithm to segment low-altitude aerial
images. This is used as a preprocessing step to form segmented
homogeneous regions. Adjacent regions are then merged using
color similarity and volume difference criteria. Experiments
performed on real aerial images of varied nature demonstrate that
this approach results in neither over-segmentation nor
under-segmentation. Vector quantization is found to give far better
results than the conventional on-the-fly watershed algorithm.
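A generic vector-quantization step of the kind underlying such segmentation can be sketched as below (a plain k-means-style codebook on 1D pixel intensities for illustration; this is not Kekre's fast codebook generation algorithm itself, and the pixel values are invented):

```python
def vq_codebook(pixels, k=2, iters=20):
    """Toy vector-quantization step: cluster pixel intensities into k
    codevectors and map each pixel to its nearest codevector (its
    segment label)."""
    # Initialise codevectors spread across the intensity range.
    lo, hi = min(pixels), max(pixels)
    codebook = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            j = min(range(k), key=lambda i: abs(p - codebook[i]))
            clusters[j].append(p)
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(clusters)]
    labels = [min(range(k), key=lambda i: abs(p - codebook[i]))
              for p in pixels]
    return codebook, labels

# Two intensity populations (dark region ~20, bright region ~200).
pixels = [18, 20, 22, 25, 198, 200, 202, 205]
codebook, labels = vq_codebook(pixels)
```

In the segmentation setting, pixels sharing a codevector form the initial homogeneous regions that are subsequently merged by color similarity and volume difference.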
Abstract: Developing techniques for mobile robot navigation
constitutes one of the major trends in current research on mobile
robotics. This paper develops a local model network (LMN) for
mobile robot navigation. The LMN represents the mobile robot by a
set of locally valid submodels that are Multi-Layer Perceptrons
(MLPs), trained using the Back Propagation (BP) algorithm. The
paper proposes fuzzy C-means (FCM) clustering to divide the input
space into subregions, after which a submodel (MLP) is identified to
represent each particular region. The submodels are then combined
in a unified structure. In the run-time phase, Radial Basis Functions
(RBFs) are employed as validity windows for the activated
submodels. The proposed structure overcomes the problem of the
changing operating regions of mobile robots. Real data are used in
all experiments, and the results for mobile robot navigation using
the proposed LMN reflect the soundness of the proposed scheme.
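The run-time blending of submodels through RBF validity windows can be sketched as follows (one-dimensional input, with invented local models and centers; the paper's MLP submodels are replaced by simple functions for illustration):

```python
import math

def lmn_predict(x, submodels, centers, width=1.0):
    """Local model network output: submodel predictions blended by
    normalized Gaussian RBF validity windows around each center."""
    weights = [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]
    total = sum(weights)
    return sum(w / total * m(x) for w, m in zip(weights, submodels))

# Two hypothetical local models, each valid in its own operating region.
submodels = [lambda x: 2 * x,        # valid near x = 0
             lambda x: 10 - x]       # valid near x = 5
centers = [0.0, 5.0]
```

Near each center the network follows the corresponding local model, and between centers it interpolates smoothly, which is what lets the LMN cope with changing operating regions.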
Abstract: Biclustering aims at identifying several biclusters that
reveal potential local patterns from a microarray matrix. A bicluster is
a sub-matrix of the microarray consisting of a subset of genes
co-regulated in a subset of conditions. In this study, we extend the
motif of subspace clustering to present a K-biclusters clustering
(KBC) algorithm for the microarray biclustering problem. Besides
minimizing the dissimilarities between genes and bicluster centers
within all biclusters, the objective function of the KBC algorithm
additionally minimizes the residues within all biclusters based on
the mean square residue model. The objective function also
maximizes the entropy of conditions, to encourage more conditions
to contribute to the identification of biclusters. The KBC algorithm
adopts a K-means-type clustering process to efficiently optimize the
partition into K biclusters. A set of experiments on a practical
microarray dataset demonstrates the performance of the proposed
KBC algorithm.
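The mean square residue term in the KBC objective, commonly attributed to Cheng and Church, measures how far a bicluster departs from an additive gene-by-condition pattern; a direct computation on invented matrices is:

```python
def mean_square_residue(bicluster):
    """Mean squared residue of a bicluster sub-matrix: zero for a
    perfectly additive (coherent) pattern, larger for noisy ones."""
    rows, cols = len(bicluster), len(bicluster[0])
    row_means = [sum(r) / cols for r in bicluster]
    col_means = [sum(bicluster[i][j] for i in range(rows)) / rows
                 for j in range(cols)]
    overall = sum(row_means) / rows
    return sum((bicluster[i][j] - row_means[i] - col_means[j] + overall) ** 2
               for i in range(rows) for j in range(cols)) / (rows * cols)

# A perfectly additive pattern has zero residue; perturbations raise it.
coherent = [[1, 2, 3], [2, 3, 4], [5, 6, 7]]
noisy = [[1, 9, 3], [2, 3, 4], [5, 6, 0]]
```

Minimizing this quantity over candidate sub-matrices is what drives the KBC algorithm toward coherently co-regulated biclusters.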