Abstract: The POD-assisted projective integration method, based on the equation-free framework, is presented in this paper. The method essentially relies on the slow manifold governing the given system. We have applied two variants, the "on-line" and "off-line" methods, to solve the one-dimensional viscous Burgers' equation. For the on-line method, we compute the slow manifold by extracting the POD modes and use them on the fly during the projective integration process, without assuming prior knowledge of the underlying slow manifold. In contrast, for the off-line method the underlying slow manifold must be computed prior to the projective integration process. The projective step is performed by the forward Euler method. Numerical experiments show that for a nonperiodic system, the on-line method is more efficient than the off-line method. Moreover, the on-line approach is more practical when applying the POD-assisted projective integration method to arbitrary systems. The critical value of the projective time step, which directly limits the efficiency of both methods, is also identified.
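To make the on-line idea concrete, the following is a minimal sketch rather than the paper's implementation: a fine forward-Euler solver for the 1D viscous Burgers equation takes a burst of inner steps, POD modes are extracted from those snapshots by SVD on the fly, and a large projective forward-Euler jump is taken in the reduced coordinates. The periodic finite-difference grid, viscosity, step sizes and number of modes are assumed values chosen for illustration.

```python
# Minimal sketch of on-line POD-assisted projective integration for the
# 1D viscous Burgers equation (illustrative only; all parameters assumed).
import numpy as np

def burgers_rhs(u, dx, nu):
    # Periodic finite-difference right-hand side of u_t = -u*u_x + nu*u_xx
    ux  = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    return -u * ux + nu * uxx

def inner_steps(u, k, dt, dx, nu):
    # k small forward-Euler steps of the fine (microscopic) solver
    snaps = [u.copy()]
    for _ in range(k):
        u = u + dt * burgers_rhs(u, dx, nu)
        snaps.append(u.copy())
    return u, np.array(snaps)

def pod_projective_step(u, k, dt, DT, dx, nu, n_modes=5):
    # 1) relax with k inner steps and collect snapshots on the fly
    u, snaps = inner_steps(u, k, dt, dx, nu)
    # 2) extract POD modes from the snapshot matrix (on-line variant)
    mean = snaps.mean(axis=0)
    _, _, Vt = np.linalg.svd(snaps - mean, full_matrices=False)
    Phi = Vt[:n_modes].T                       # spatial POD basis vectors
    # 3) estimate the slow time derivative in POD coordinates
    a_last = Phi.T @ (snaps[-1] - mean)
    a_prev = Phi.T @ (snaps[-2] - mean)
    dadt = (a_last - a_prev) / dt
    # 4) projective forward-Euler jump of size DT in the reduced space
    a_proj = a_last + DT * dadt
    return mean + Phi @ a_proj

# Example usage with assumed parameters
N, nu = 256, 0.01
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)
for _ in range(50):
    u = pod_projective_step(u, k=20, dt=1e-4, DT=2e-3, dx=dx, nu=nu)
```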
Abstract: Current image-based individual human recognition
methods, such as fingerprints, face, or iris biometric modalities
generally require a cooperative subject, views from certain aspects,
and physical contact or close proximity. These methods cannot
reliably recognize non-cooperating individuals at a distance in the
real world under changing environmental conditions. Gait, which
concerns recognizing individuals by the way they walk, is a relatively
new biometric without these disadvantages. The inherent gait
characteristic of an individual makes it irreplaceable and useful in
visual surveillance.
In this paper, an efficient gait recognition system for human
identification is proposed, based on two extracted features: the width
vector of the binary silhouette and MPEG-7 region-based shape
descriptors. In the proposed method, foreground objects, i.e., humans
and other moving objects, are extracted by estimating the background
with a Gaussian Mixture Model (GMM); a median filtering operation is
then performed to remove noise from the background-subtracted image. A
moving target classification algorithm that uses shape and boundary
information is applied to separate human beings (i.e., pedestrians)
from other foreground objects (e.g., vehicles). Subsequently, the width
vector of the outer contour of the binary silhouette and the MPEG-7
Angular Radial Transform coefficients are taken as the feature vector.
Next, Principal Component Analysis (PCA) is applied to the selected
feature vector to reduce its dimensionality. The extracted feature
vectors are used to train a Hidden Markov Model (HMM) for the
identification of individuals. The proposed system is evaluated on a
set of gait sequences, and the experimental results show the efficacy
of the proposed algorithm.
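As an illustration of the foreground-extraction front end described above (not the authors' code), the sketch below performs GMM-based background subtraction followed by median filtering to obtain binary silhouettes. OpenCV's MOG2 subtractor stands in for the paper's GMM, and the video path and parameter values are assumptions.

```python
# GMM background subtraction + median filtering to produce binary silhouettes.
import cv2

cap = cv2.VideoCapture("gait_sequence.avi")          # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

silhouettes = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)                 # GMM background subtraction
    fg_mask = cv2.medianBlur(fg_mask, 5)              # remove speckle noise
    _, silhouette = cv2.threshold(fg_mask, 127, 255, cv2.THRESH_BINARY)
    silhouettes.append(silhouette)                    # binary silhouette per frame
cap.release()
```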
Abstract: This study demonstrates the use of Class F fly ash in
combination with lime or lime kiln dust (LKD) in the full depth
reclamation (FDR) of asphalt pavements. FDR, in the context of this paper, is a
process of pulverizing a predetermined amount of flexible pavement
that is structurally deficient, blending it with chemical additives and
water, and compacting it in place to construct a new stabilized base
course. Test sections of two structurally deficient asphalt pavements
were reclaimed using Class F fly ash in combination with lime and
lime kiln dust. In addition, control sections were constructed using
cement, cement and emulsion, lime kiln dust and emulsion, and mill
and fill. The service performance and structural behavior of the FDR
pavement test sections were monitored to determine how the fly ash
sections compared to other more traditional pavement rehabilitation
techniques. Service performance and structural behavior were
determined with the use of sensors embedded in the road and Falling
Weight Deflectometer (FWD) tests. Monitoring results of the FWD
tests conducted up to 2 years after reclamation show that the cement,
fly ash+LKD, and fly ash+lime sections exhibited two-year resilient
modulus values comparable to open-graded cement-stabilized
aggregates (more than 750 ksi). The cement treatment resulted in a
significant increase in resilient modulus within 3 weeks of
construction; beyond this curing time, the stiffness increase was
slow. In contrast, the fly ash+LKD and fly ash+lime test
sections showed a slower short-term increase in stiffness. The average
resilient modulus values of the fly ash+LKD and fly ash+lime sections
two years after construction were in excess of 800 ksi. Additional
longer-term testing data will be available from ongoing pavement
performance and environmental condition data collection at the two
pavement sites.
Abstract: Freeways are originally designed to provide high
mobility to road users. However, the increase in population and
vehicle numbers has led to increasing congestion around the world.
Daily recurrent congestion substantially reduces the freeway capacity
when it is most needed. Building new highways and expanding the
existing ones is an expensive solution and impractical in many
situations. Intelligent and vision-based techniques can, however, be
efficient tools in monitoring highways and increasing the capacity of
the existing infrastructures. The crucial step for highway monitoring
is vehicle detection. In this paper, we propose one such
technique. The approach is based on artificial neural networks
(ANN) for vehicle detection and counting. The detection process
uses freeway video images and starts by automatically extracting
the image background from the successive video frames. Once the
background is identified, subsequent frames are used to detect
moving objects through image subtraction. The result is segmented
using the Sobel operator for edge detection. The ANN is then used in
the detection and counting phase. Applying this technique to the
busiest freeway in Riyadh (King Fahd Road) achieved higher than
98% detection accuracy despite light intensity changes,
occlusions, and shadows.
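A hedged sketch of this detection front end follows (not the paper's implementation): a static background is estimated from successive frames, moving objects are detected by image subtraction, and the result is segmented with the Sobel edge operator. The median-based background model, the video path and all thresholds are assumptions.

```python
# Background estimation, image subtraction and Sobel edge segmentation.
import cv2
import numpy as np

cap = cv2.VideoCapture("king_fahd_road.avi")           # hypothetical footage
frames = []
while len(frames) < 100:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
cap.release()

# Static background estimated as the per-pixel median of the collected frames
background = np.median(np.stack(frames), axis=0).astype(np.uint8)

def moving_object_edges(gray_frame, background, diff_thresh=30):
    # Image subtraction against the estimated background
    diff = cv2.absdiff(gray_frame, background)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Sobel edge detection restricted to the moving regions
    masked = cv2.bitwise_and(gray_frame, gray_frame, mask=mask)
    gx = cv2.Sobel(masked, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(masked, cv2.CV_64F, 0, 1, ksize=3)
    edges = cv2.magnitude(gx, gy)
    return cv2.convertScaleAbs(edges)                   # edge map fed to the ANN stage

# Example: edge_map = moving_object_edges(frames[-1], background)
```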
Abstract: This article outlines conceptualization and
implementation of an intelligent system capable of extracting
knowledge from databases. The use of hybridized features of both
Rough and Fuzzy Set theory renders the developed system flexible
in dealing with discrete as well as continuous datasets. A raw data set
provided to the system is initially transformed into a computer-legible
format, followed by pruning of the data set. The refined data set is
then processed through various Rough Set operators which enable
discovery of parameter relationships and interdependencies. The
discovered knowledge is automatically transformed into a rule base
expressed in Fuzzy terms. Two exemplary cancer repository datasets
(for breast and lung cancer) have been used to implement and test
the proposed framework.
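As a toy illustration of the Rough Set operators mentioned above (not the paper's system), the sketch below computes the lower and upper approximations of a decision class from a small information table; the attribute values and class labels are invented for demonstration.

```python
# Lower/upper approximations of a decision class from an information table.
from collections import defaultdict

# Each row: (condition-attribute values, decision label) -- toy data
table = [
    ((1, 0), "malignant"),
    ((1, 0), "benign"),     # indiscernible from row 0 but labelled differently
    ((0, 1), "benign"),
    ((1, 1), "malignant"),
]

# Group objects into indiscernibility classes over the condition attributes
classes = defaultdict(list)
for idx, (conds, label) in enumerate(table):
    classes[conds].append(idx)

target = {i for i, (_, label) in enumerate(table) if label == "malignant"}

lower, upper = set(), set()
for members in classes.values():
    member_set = set(members)
    if member_set <= target:          # class entirely inside the target concept
        lower |= member_set
    if member_set & target:           # class overlaps the target concept
        upper |= member_set

boundary = upper - lower              # objects that can only be classified roughly
print(lower, upper, boundary)
```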
Abstract: Camera calibration is an indispensable step for augmented
reality or image guided applications where quantitative information
should be derived from the images. Usually, a camera
calibration is obtained by taking images of a special calibration object
and extracting the image coordinates of the projected calibration marks,
enabling calculation of the projection from 3D world coordinates
to 2D image coordinates. Such a procedure thus involves
typical steps, including feature point localization in the acquired
images, camera model fitting, correction of the distortion introduced by
the optics and, finally, optimization of the model's parameters. In
this paper we propose to extend this list by a further step concerning
the identification of the optimal subset of images yielding the smallest
overall calibration error. For this, we present a Monte Carlo based
algorithm, along with a deterministic extension, that automatically
determines the images yielding an optimal calibration. Finally, we
present results showing that the calibration can be significantly
improved by automated image selection.
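A minimal sketch of the Monte Carlo selection idea (not the authors' exact algorithm) follows: calibrate repeatedly on random subsets of the available views with OpenCV and keep the subset with the smallest RMS reprojection error. The per-view object and image point lists are assumed to come from a routine such as cv2.findChessboardCorners; the subset size and trial count are assumptions.

```python
# Monte Carlo search for the image subset with the lowest calibration error.
import random
import cv2

def calibrate_subset(object_points, image_points, image_size, indices):
    # Calibrate using only the selected views and return the RMS reprojection error
    objs = [object_points[i] for i in indices]
    imgs = [image_points[i] for i in indices]
    rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        objs, imgs, image_size, None, None)
    return rms, camera_matrix, dist_coeffs

def monte_carlo_selection(object_points, image_points, image_size,
                          subset_size=10, trials=200, seed=0):
    rng = random.Random(seed)
    n = len(object_points)
    best = (float("inf"), None)                      # (rms, subset indices)
    for _ in range(trials):
        subset = rng.sample(range(n), subset_size)
        rms, _, _ = calibrate_subset(object_points, image_points,
                                     image_size, subset)
        if rms < best[0]:
            best = (rms, subset)
    return best                                      # smallest-error subset found
```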
Abstract: In this work, we improve a previously developed
segmentation scheme aimed at extracting edge information from
speckled images using a maximum likelihood edge detector. The
scheme was based on finding a threshold for the probability density
function of a new kernel defined as the arithmetic mean-to-geometric
mean ratio field over a circular neighborhood set and, in a general
context, is founded on a likelihood random field model (LRFM). The
segmentation algorithm was applied to discriminated speckle areas
obtained using simple elliptic discriminant functions based on
measures of the signal-to-noise ratio with fractional order moments.
A rigorous stochastic analysis was used to derive an exact expression
for the cumulative distribution function of the random
field. Based on this, an accurate probability
of error was derived and the performance of the scheme was
analysed. The improved segmentation scheme performed well for
both simulated and real images and showed superior results to those
previously obtained using the original LRFM scheme and standard
edge detection methods. In particular, the false alarm probability was
markedly lower than that of the original LRFM method, with
oversegmentation artifacts virtually eliminated. The importance of
this work lies in the development of a stochastic-based segmentation,
allowing an accurate quantification of the probability of false
detection. Non-visual quantification of misclassification in medical
ultrasound speckle images is relatively new and of interest to
clinicians.
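For concreteness, the following is an illustrative computation (not the original code) of the arithmetic-mean-to-geometric-mean ratio field over a circular neighborhood, the kernel on which the threshold is applied; the neighborhood radius and the example threshold are assumed values.

```python
# Arithmetic-mean / geometric-mean ratio field over a circular neighborhood.
import numpy as np
from scipy import ndimage

def am_gm_ratio_field(image, radius=3, eps=1e-12):
    # Normalized circular averaging kernel defining the neighborhood set
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    kernel = ((x**2 + y**2) <= radius**2).astype(np.float64)
    kernel /= kernel.sum()
    img = image.astype(np.float64) + eps                     # avoid log(0)
    am = ndimage.convolve(img, kernel, mode="reflect")        # arithmetic mean
    gm = np.exp(ndimage.convolve(np.log(img), kernel,
                                 mode="reflect"))             # geometric mean
    return am / gm                                            # ratio >= 1, large near edges

# Example: edge candidates where the ratio exceeds an assumed threshold
# edge_map = am_gm_ratio_field(speckled_image) > 1.05
```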
Abstract: Tool life estimated from conventional discrete-wear data obtained in experimental tests is stochastic in nature because of the many individual and interacting parameters involved. It is common practice in batch production to continually use the same tool to machine different parts with disparate machining parameters. In such an environment, the optimal points at which tools have to be changed, while achieving minimum production cost and maximum production rate within the surface roughness specifications, have not been adequately studied. In the current study, two relevant aspects are investigated using coated and uncoated inserts in turning operations: (i) the accuracy of using machinability information from fixed-parameter testing procedures when variable-parameter situations emerge, and (ii) the credibility of tool life machinability data from prior discrete testing procedures in non-stop machining. A novel technique is proposed and verified to normalize the conventional fixed-parameter machinability data to suit cases in which parameters have to be changed for the same tool. In addition, an experimental investigation has been carried out to evaluate the error in tool life assessment when machinability data from discrete testing procedures are employed in uninterrupted practical machining.
Abstract: In this study, Ni and Ti powders with 50.5 at.%
Ni were blended for 12 h and cold pressed at different pressures
(50, 75 and 100 MPa). Porous products were then obtained by
synthesizing the Ni-Ti compacts through SHS (self-propagating
high-temperature synthesis) at different preheating temperatures (200,
250 and 300 °C) and heating rates (30, 60 and 90 °C/min). The effects
of the compaction pressure, preheating temperature and heating rate on
in vivo biocompatibility were investigated. The porosity of the
synthesized products was in the range of 50.7–59.7 vol.%. The
pressure, preheating temperature and heating rate were found to have
an important effect on the in vivo biocompatibility of the synthesized
products. The greatest amount of fibrotic tissue within the porous
implant was observed at the longest in vivo period (6 months), for the
compacts pressed at 100 MPa.
Abstract: Knowledge Discovery in Databases (KDD) has
evolved into an important and active area of research because of
theoretical challenges and practical applications associated with the
problem of discovering (or extracting) interesting and previously
unknown knowledge from very large real-world databases. Rough
Set Theory (RST) is a mathematical formalism for representing
uncertainty that can be considered an extension of the classical set
theory. It has been used in many different research areas, including
those related to inductive machine learning and reduction of
knowledge in knowledge-based systems. One important concept
related to RST is that of a rough relation. In this paper we present
the current status of research on applying rough set theory to KDD,
which is helpful for handling the characteristics of real-world
databases. The main aim is to show how rough sets and rough set
analysis can be effectively used to extract knowledge from large
databases.
Abstract: Electrocardiogram (ECG) is considered to be the
backbone of cardiology. ECG is composed of P, QRS & T waves and
information related to cardiac diseases can be extracted from the
intervals and amplitudes of these waves. The first step in extracting
ECG features starts from the accurate detection of R peaks in the
QRS complex. We have developed a robust R wave detector using
wavelets. The wavelets used for detection are the Daubechies and
symmetric wavelet families. The method does not require any
preprocessing and therefore needs only the ECG recordings themselves
to perform the detection. The data have been taken from the MIT-BIH
arrhythmia database, and the signals from Lead II have been analyzed.
MATLAB 7.0 has been used to develop the algorithm. The ECG signal under
test has been decomposed to the required level using the selected
wavelet, and the detail coefficient d4 has been selected based
on energy, frequency and cross-correlation analysis of the decomposition
structure of the ECG signal. The robustness of the method is apparent
from the obtained results.
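The following sketch illustrates the general idea, assuming PyWavelets in place of the MATLAB toolbox used in the paper: decompose the Lead-II signal with a Daubechies wavelet, reconstruct only the level-4 detail band (d4), and pick R-peak candidates on it. The wavelet order, threshold and refractory distance are assumptions.

```python
# R-peak candidates from the d4 detail band of a wavelet decomposition.
import numpy as np
import pywt
from scipy.signal import find_peaks

def detect_r_peaks(ecg, fs=360, wavelet="db4", level=4):
    # Multilevel wavelet decomposition of the Lead-II signal
    coeffs = pywt.wavedec(ecg, wavelet, level=level)   # [a4, d4, d3, d2, d1]
    # Keep only the level-4 detail band and zero everything else
    kept = [np.zeros_like(c) for c in coeffs]
    kept[1] = coeffs[1]
    d4_signal = pywt.waverec(kept, wavelet)[: len(ecg)]
    # Peak picking on the squared detail signal with an assumed threshold
    energy = d4_signal ** 2
    threshold = 0.3 * energy.max()
    peaks, _ = find_peaks(energy, height=threshold, distance=int(0.25 * fs))
    return peaks                                        # sample indices of candidate R peaks

# Example: peaks = detect_r_peaks(lead_ii_signal, fs=360)
```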
Abstract: This paper proposes a novel model for short-term load
forecasting (STLF) in the electricity market. The prior electricity
demand data are treated as time series. The model is composed of
several neural networks whose input data are processed using a wavelet
technique. The model is implemented as a simulation program
written in MATLAB. The load data are decomposed into several
wavelet coefficient series using
the wavelet transform technique known as Non-decimated Wavelet
Transform (NWT). The reason for using this technique is the belief
in the possibility of extracting hidden patterns from the time series
data. The wavelet coefficient series are used to train the neural
networks (NNs) and used as the inputs to the NNs for electricity load
prediction. The Scaled Conjugate Gradient (SCG) algorithm is used as
the learning algorithm for the NNs. To get the final forecast data, the
outputs from the NNs are recombined using the same wavelet
technique. The model was evaluated with the electricity load data of
the Electronic Engineering Department at Mandalay Technological
University in Myanmar. The simulation results showed that the
model was capable of producing reasonable forecasting accuracy in
STLF.
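A hedged sketch of the wavelet-plus-NN pipeline is given below (not the paper's MATLAB program): a stationary (non-decimated) wavelet transform decomposes a synthetic load series, and one small network per coefficient series is trained on lagged values to forecast one step ahead. scikit-learn's MLPRegressor stands in for the SCG-trained networks, and all data and parameters are illustrative.

```python
# Non-decimated wavelet decomposition + one neural network per coefficient series.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(512)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)  # toy demand series

# Stationary wavelet transform: one (approximation, detail) pair per level
coeff_series = []
for cA, cD in pywt.swt(load, "db4", level=3):
    coeff_series.extend([cA, cD])

def lagged(series, lags=24):
    # Build a lagged feature matrix: predict series[i+lags] from the previous lags values
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    return X, y

# Train one network per wavelet coefficient series and forecast one step ahead
forecasts = []
for series in coeff_series:
    X, y = lagged(series)
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X[:-1], y[:-1])                    # hold out the final point
    forecasts.append(net.predict(X[-1:]))      # pseudo one-step-ahead coefficient forecast
# In the paper, the per-band forecasts are recombined with the inverse wavelet transform.
```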
Abstract: Deformable active contours are widely used in
computer vision and image processing applications for image
segmentation, especially in biomedical image analysis. The active
contour or “snake" deforms towards a target object by controlling the
internal, image and constraint forces. However, if the contour is
initialized with a small number of control points, there is a high
probability of bypassing the sharp corners of the object during
deformation of the contour. In this paper, a new technique is
proposed to construct the initial contour by incorporating prior
knowledge of the significant corners of the object, detected using the
Harris operator. This reconstructed contour then deforms,
attracting the snake towards the target object without missing the
corners. Experimental results with several synthetic images show the
ability of the new technique to deal with sharp corners with higher
accuracy than traditional methods.
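As an illustration of the initialization idea (not the authors' implementation), the sketch below detects significant corners with the Harris operator and snaps nearby points of a coarse circular initial contour onto them before running an active contour; the test image, parameter values and snapping rule are assumptions, using scikit-image.

```python
# Harris-corner-aware initialization of an active contour ("snake").
import numpy as np
from skimage import data, color
from skimage.feature import corner_harris, corner_peaks
from skimage.filters import gaussian
from skimage.segmentation import active_contour

image = color.rgb2gray(data.astronaut())              # placeholder test image
corners = corner_peaks(corner_harris(image), min_distance=10, threshold_rel=0.2)

# Start from a coarse circular contour around the object (row, col coordinates)...
s = np.linspace(0, 2 * np.pi, 60, endpoint=False)
init = np.column_stack([220 + 100 * np.sin(s), 220 + 100 * np.cos(s)])

# ...and snap each initial point to the nearest detected corner if one is close
for i, point in enumerate(init):
    d = np.linalg.norm(corners - point, axis=1)
    if d.size and d.min() < 30:
        init[i] = corners[d.argmin()]

snake = active_contour(gaussian(image, sigma=3), init,
                       alpha=0.015, beta=10, gamma=0.001)
```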
Abstract: Glaucoma diagnosis involves extracting three features
of the fundus image: the optic cup, the optic disc and the
vasculature. Present manual diagnosis is expensive, tedious and time
consuming. A number of studies have been conducted to automate this
process. However, the variability between the diagnostic capability of
an automated system and that of an ophthalmologist has yet to be
established. This paper discusses the efficiency of, and variability
between, ophthalmologist opinion and a digital technique,
thresholding. The efficiency and variability measures are based on
image quality grading: poor, satisfactory or good. The images are
separated into four channels: gray, red, green and blue. A scientific
investigation was conducted with three ophthalmologists who graded the
images based on image quality. The images are thresholded using
multi-thresholding and graded in the same way as by the
ophthalmologists. A comparison of the grades from the ophthalmologists
and from thresholding is made. The results show that there is only a
small variability between the results of the ophthalmologists and the
digital thresholding.
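To illustrate the digital technique (not the paper's exact procedure), the sketch below splits a fundus image into gray, red, green and blue channels and applies multi-Otsu thresholding to each; the file name and the number of threshold classes are assumptions.

```python
# Channel separation and multi-thresholding of a fundus image.
import numpy as np
from skimage import io, color
from skimage.filters import threshold_multiotsu

fundus = io.imread("fundus_image.png")                 # hypothetical RGB fundus image
channels = {
    "gray": (color.rgb2gray(fundus) * 255).astype(np.uint8),
    "red": fundus[..., 0],
    "green": fundus[..., 1],
    "blue": fundus[..., 2],
}

segmented = {}
for name, channel in channels.items():
    thresholds = threshold_multiotsu(channel, classes=3)     # two thresholds, three classes
    segmented[name] = np.digitize(channel, bins=thresholds)  # label map 0..2 per channel
```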