Abstract: Patients with diabetes are susceptible to chronic foot
wounds which may be difficult to manage and slow to heal.
Diagnosis and treatment currently rely on the subjective judgement of
experienced professionals. An objective method of tissue assessment
is required. In this paper, a data fusion approach was taken to wound
tissue classification. The supervised Maximum Likelihood and
unsupervised Multi-Modal Expectation Maximisation algorithms
were used to classify tissues within simulated wound models by
weighting the contributions of both colour and 3D depth information.
It was found that, at low weightings, depth information could show
significant improvements in classification accuracy when compared
to classification by colour alone, particularly when using the
maximum likelihood method. However, larger weightings were
found to have an entirely negative effect on accuracy.
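The kind of weighted colour/depth fusion described above can be sketched as follows: a per-class Gaussian maximum-likelihood classifier applied to colour features concatenated with a depth feature scaled by a weight w. The data, names, and single-scalar weighting below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fuse(colour, depth, w):
    """Weight the depth channel by w before concatenating it with the colour features."""
    return np.hstack([colour, w * depth])

def fit_ml_classifier(X, y):
    """Fit a per-class Gaussian (mean, covariance) maximum-likelihood model."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularised
        params[c] = (Xc.mean(axis=0), cov)
    return params

def log_likelihood(x, mean, cov):
    d = x - mean
    return -0.5 * (d @ np.linalg.inv(cov) @ d + np.log(np.linalg.det(cov)))

def classify(X, params):
    labels = list(params)
    scores = np.array([[log_likelihood(x, *params[c]) for c in labels] for x in X])
    return np.array([labels[i] for i in scores.argmax(axis=1)])
```

With a small w the depth channel contributes without dominating; a large w lets depth statistics swamp the colour statistics, mirroring the effect reported above.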
Abstract: A series of experimental tests was conducted on a
section of a 660 kW wind turbine blade to measure the pressure
distribution over the model oscillating in plunging motion. In order to
minimize the amount of data required to predict the aerodynamic loads
of the airfoil, a General Regression Neural Network (GRNN) was
trained using the measured experimental data. Once the network
proved sufficiently accurate, it was used to predict the flow behavior
of the airfoil for the desired conditions.
Results showed that, using only a small subset of the acquired data,
the trained neural network was able to produce accurate predictions
with minimal errors when compared with the corresponding measured
values. The trained network therefore predicts the aerodynamic
coefficients of the plunging airfoil accurately at different
oscillation frequencies, amplitudes, and angles of attack, reducing
the cost of tests while achieving acceptable accuracy.
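A GRNN is essentially Nadaraya-Watson kernel regression, which makes a minimal sketch straightforward (the data and the smoothing parameter sigma below are illustrative, not the experimental values from the study):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.1):
    """General Regression Neural Network: kernel-weighted average of training targets."""
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distance to every pattern
        w = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel weights
        preds.append(np.dot(w, y_train) / w.sum())
    return np.array(preds)
```

Because the network simply interpolates the stored patterns, training reduces to storing the measured data and choosing sigma, which is why only a small subset of the measurements is needed.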
Abstract: Classification is an important topic in machine learning
and bioinformatics. Many datasets have been introduced for
classification tasks. A dataset contains multiple features, and the quality of those features influences the achievable classification accuracy.
The classification power of each feature differs. In this study, we
suggest the Classification Influence Index (CII) as an indicator of the classification power of each feature. CII enables evaluation of the
features in a dataset and improves classification accuracy through transformation of the dataset. By conducting experiments using CII
and the k-nearest neighbor classifier to analyze real datasets, we confirmed that the proposed index provided meaningful improvement
of the classification accuracy.
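The abstract does not define CII itself, so as a hypothetical stand-in the sketch below scores each feature by the leave-one-out k-NN accuracy obtained using that feature alone; such per-feature scores could then be used to weight or transform the dataset:

```python
import numpy as np

def single_feature_knn_index(X, y, k=3):
    """Per-feature classification power: leave-one-out k-NN accuracy using one
    feature at a time (an illustrative stand-in for the paper's CII)."""
    n, d = X.shape
    scores = np.zeros(d)
    for j in range(d):
        correct = 0
        for i in range(n):
            dist = np.abs(X[:, j] - X[i, j])
            dist[i] = np.inf                     # leave the query point out
            nearest = np.argsort(dist)[:k]
            pred = np.bincount(y[nearest]).argmax()  # majority vote
            correct += (pred == y[i])
        scores[j] = correct / n
    return scores
```

An informative feature scores near 1.0 while a noise feature scores near chance level, so the index separates useful from useless features.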
Abstract: This paper deals with a high-order accurate Runge-Kutta
Discontinuous Galerkin (RKDG) method for the numerical
solution of the wave equation, one of the simplest cases of a
linear hyperbolic partial differential equation. A nodal DG method is
used for the finite element space discretization in 'x' with discontinuous
approximations. The method combines two key ideas drawn
from the finite volume and finite element methods: the
physics of wave propagation is accounted for by means of
Riemann problems, and accuracy is obtained by means of high-order
polynomial approximations within the elements. A high-order accurate
Low-Storage Explicit Runge-Kutta (LSERK) method is used for the
temporal discretization in 't', which allows the method to be nonlinearly
stable regardless of its accuracy. The resulting RKDG
methods are stable and high-order accurate. The L1, L2, and L∞ error
norm analysis shows that the scheme is highly accurate and effective.
Hence, the method is well suited to achieve high order accurate
solution for the scalar wave equation and other hyperbolic equations.
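The low-storage update referred to above can be sketched as follows. The coefficients are the Carpenter-Kennedy values commonly quoted for the 5-stage, 4th-order LSERK scheme (as listed, e.g., in Hesthaven and Warburton's nodal DG text); the usage below applies the step to a periodic central-difference semi-discretization of u_t + u_x = 0 as a simple stand-in for the paper's DG spatial operator:

```python
import numpy as np

# Carpenter-Kennedy 5-stage, 4th-order low-storage RK coefficients.
rk4a = [0.0,
        -567301805773 / 1357537059087,
        -2404267990393 / 2016746695238,
        -3550918686646 / 2091501179385,
        -1275806237668 / 842570457699]
rk4b = [1432997174477 / 9575080441755,
        5161836677717 / 13612068292357,
        1720146321549 / 2090206949498,
        3134564353537 / 4481467310338,
        2277821191437 / 14882151754819]

def lserk_step(u, rhs, dt):
    """One LSERK step: only a single extra register k is stored, whatever the order."""
    k = np.zeros_like(u)
    for a, b in zip(rk4a, rk4b):
        k = a * k + dt * rhs(u)
        u = u + b * k
    return u
```

The low-storage property is visible in the loop: each stage overwrites k and u in place instead of keeping all five stage vectors.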
Abstract: Ice cover has a significant impact on rivers: it affects the ice melting capacity, which can result in flooding, restrict navigation, and modify the ecosystem and microclimate. River ice is made up of different ice types with varying thickness, so surveillance of river ice plays an important role. River ice types are captured using an infrared imaging camera, which captures images even at night. In this paper the river ice infrared texture images are analysed using first-order statistical methods and second-order statistical methods. The second-order statistical methods considered are the spatial gray level dependence method, the gray level run length method, and the gray level difference method. The performance of the feature extraction methods is evaluated using a Probabilistic Neural Network classifier, and it is found that the first-order statistical method and the second-order statistical methods each yield low accuracy on their own. The features extracted from the first-order statistical method and the second-order statistical method are therefore combined, and it is observed that these combined features (first-order statistical method + gray level run length method) provide higher accuracy than the features from either the first-order or the second-order statistical method alone.
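As an illustration of the first-order statistical features mentioned above, the following sketch computes the usual histogram-based measures for a grey-level image; the exact feature set used in the paper is not specified in the abstract, so this particular list is an assumption:

```python
import numpy as np

def first_order_features(img, levels=256):
    """First-order statistical texture features from the grey-level histogram."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                       # grey-level probabilities
    g = np.arange(levels)
    mean = (g * p).sum()
    var = ((g - mean) ** 2 * p).sum()
    std = np.sqrt(var)
    skew = ((g - mean) ** 3 * p).sum() / std ** 3 if std > 0 else 0.0
    energy = (p ** 2).sum()                     # uniformity of the histogram
    nz = p[p > 0]
    entropy = -(nz * np.log2(nz)).sum()
    return {"mean": mean, "variance": var, "skewness": skew,
            "energy": energy, "entropy": entropy}
```

Because these features depend only on the histogram, they ignore spatial arrangement, which is exactly what the second-order (co-occurrence and run-length) methods add.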
Abstract: Modern technological processes make it possible to
inspect the surfaces of manufactured parts, which is necessary to
guarantee product quality. The geometrical structure of a part's
surface includes form, proportion, accuracy of shape, accuracy of
size, alignment, and surface topography (roughness, waviness, etc.).
All these parameters depend on the technology, the production
machine parameters, and the material properties, but also on the
human operator. Each of these parameters contributes to the total
accuracy of the part, that is, to its accuracy of shape. One of
the most important elements of shape accuracy is roundness. This
paper deals with the comparison of roundness deviations measured on
coordinate measuring machines and on special single-purpose
machines. It describes measurement by the discrete (discontinuous)
method and the scanning (continuous) method on coordinate measuring
machines, and a comparison with the reference method used on
single-purpose machines.
Abstract: Iris-based biometric authentication is gaining importance
in recent times. Iris biometric processing, however, is complex
and computationally very expensive. In the overall processing
of iris biometrics in an iris-based biometric authentication system,
feature processing is an important task. In feature processing, we extract
iris features, which are ultimately used in matching. Since there
is a large number of iris features and the computational time increases
with the number of features, it is a challenge to
develop an iris processing system with as few features as possible
while not compromising correctness.
In this paper, we address this issue and present an approach to the
feature extraction and matching process. We apply the Daubechies D4
wavelet with 4 levels to extract features from iris images. These
features are encoded with 2 bits by quantizing into 4 quantization
levels. With our proposed approach it is possible to represent an
iris template with only 304 bits, whereas existing approaches require
as many as 1024 bits. In addition, we assign different weights to
different iris regions when comparing two iris templates, which
significantly increases the accuracy. Further, we match iris templates based on
a weighted similarity measure. Experimental results on several iris
databases substantiate the efficacy of our approach.
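The 304-bit template corresponds to 152 coefficients at 2 bits each. As an illustrative sketch (the quartile thresholds and uniform weights below are our assumptions; the paper's quantization levels and region weights are not given in the abstract), a 2-bit quantizer and a weighted similarity could look like:

```python
import numpy as np

def quantize_2bit(coeffs):
    """Quantize real-valued coefficients into 4 levels (2 bits each)
    using quartile thresholds."""
    t = np.quantile(coeffs, [0.25, 0.5, 0.75])
    return np.digitize(coeffs, t)        # symbol values in {0, 1, 2, 3}

def weighted_similarity(code_a, code_b, weights):
    """Weighted fraction of matching 2-bit symbols between two templates."""
    match = (code_a == code_b).astype(float)
    return np.dot(match, weights) / weights.sum()
```

Giving higher weights to iris regions that are rarely occluded (and lower weights to regions near the eyelids) is one way such region weighting can raise matching accuracy.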
Abstract: This paper presents an evaluation of a wavelet-based
digital watermarking technique used in estimating the quality of
video sequences transmitted over Additive White Gaussian Noise
(AWGN) channel in terms of a classical objective metric, such as
Peak Signal-to-Noise Ratio (PSNR), without the need for the original
video. In this method, a watermark is embedded into the Discrete
Wavelet Transform (DWT) domain of the original video frames
using a quantization method. The degradation of the extracted
watermark can be used to estimate the video quality in terms of
PSNR with good accuracy. We calculated PSNR for video frames
contaminated with AWGN and compared the values with those
estimated using the Watermarking-DWT based approach. It is found
that the calculated and estimated quality measures of the video
frames are highly correlated, suggesting that this method can provide
a good quality measure for video frames transmitted over AWGN
channel without the need for the original video.
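For reference, the full-reference PSNR that the watermark degradation is calibrated against is computed as follows (the standard definition, not code from the paper):

```python
import numpy as np

def psnr(original, degraded, peak=255.0):
    """Peak Signal-to-Noise Ratio between two frames, in dB."""
    mse = np.mean((original.astype(float) - degraded.astype(float)) ** 2)
    if mse == 0:
        return float("inf")           # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```

The appeal of the watermark-based estimate is precisely that it approximates this value at the receiver without access to the `original` argument.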
Abstract: Segmentation, filtering out of measurement errors and
identification of breakpoints are integral parts of any analysis of
microarray data for the detection of copy number variation (CNV).
Existing algorithms designed for these tasks have had some successes
in the past, but they tend to be O(N²) in either computation time or
memory requirement, or both, and the rapid advance of microarray
resolution has practically rendered such algorithms useless. Here we
propose an algorithm, SAD, that is much faster, requires far less
memory (O(N) in both computation time and memory requirement),
and offers higher accuracy. The two key ingredients of SAD are the
fundamental assumption in statistics that measurement errors are
normally distributed and the mathematical relation that the product of
two Gaussians is another Gaussian (function). We have produced a
computer program for analyzing CNV based on SAD. In addition to
being fast and small it offers two important features: quantitative
statistics for predictions and, with only two user-decided parameters,
ease of use. Its speed shows little dependence on genomic profile.
Running on an average modern computer, it completes CNV analyses
for a 262 thousand-probe array in ~1 second and a 1.8 million-probe
array in 9 seconds.
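The Gaussian-product identity at the heart of SAD has a simple closed form: the product of N(mu1, var1) and N(mu2, var2) is, up to a scale factor, another Gaussian with the combined precision and a precision-weighted mean. A minimal sketch of fusing two noisy measurements this way (ours, not the SAD implementation):

```python
def gaussian_product(mu1, var1, mu2, var2):
    """Product of two Gaussian densities is (up to normalisation) a Gaussian:
    combined precision, precision-weighted mean."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)    # precisions add
    mu = var * (mu1 / var1 + mu2 / var2)     # precision-weighted mean
    return mu, var
```

Repeatedly folding neighbouring probe measurements together with this rule needs only constant work per probe, which is what keeps the algorithm O(N) in both time and memory.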
Abstract: The hidden-point bar method is useful in many
surveying applications. The method involves determining the
coordinates of a hidden point as a function of horizontal and vertical
angles measured to three fixed points on the bar. Using these
measurements, the procedure involves calculating the slant angles,
the distances from the station to the fixed points, the coordinates of
the fixed points, and then the coordinates of the hidden point. The
propagation of the measurement errors in this complex process has
not been fully investigated in the literature. This paper evaluates the
effect of the bar geometry on the position accuracy of the hidden
point which depends on the measurement errors of the horizontal and
vertical angles. The results are used to establish some guidelines
regarding the inclination angle of the bar and the location of the
observed points that provide the best accuracy.
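As a simplified illustration of the final step (ours, reduced to the collinear two-point case rather than the paper's three-point angle-based procedure), once the coordinates of two bar points are known, the hidden point is found by extrapolating along the bar:

```python
import numpy as np

def hidden_point(p1, p2, d):
    """Extrapolate the hidden point lying on the bar's line, a distance d
    beyond p2 (collinear two-point special case)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    u = (p2 - p1) / np.linalg.norm(p2 - p1)   # unit vector along the bar
    return p2 + d * u
```

The extrapolation shows why bar geometry matters for error propagation: small errors in p1 and p2 are amplified roughly by the ratio of d to the spacing between the observed points.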
Abstract: Time series models have been used to make predictions of academic enrollments, weather, road accidents, casualties, stock prices, etc. Based on the concepts of quantile regression models, we have developed a simple time-variant quantile-based fuzzy time series forecasting method. The proposed method bases its forecast on a prediction of the future trend of the data. In place of the actual quantiles of the data at each point, we convert this statistical concept into a fuzzy one by using fuzzy quantiles derived from an ensemble of fuzzy membership functions. We give a fuzzy metric to use the trend forecast and calculate the future value. The proposed model is applied to TAIFEX forecasting. It is shown that the proposed method performs best among the compared models with respect to model complexity and forecasting accuracy.
Abstract: In this research, the preparation of a land-use map from
LISS III scanner satellite data, belonging to the IRS, for the Aghche
region in Isfahan province is studied. For this purpose, the
IRS satellite images of August 2008 were used, and the various land
uses in the region, including rangelands, irrigation farming, dry
farming, gardens, and urban areas, were separated and identified. GPS
and the Erdas Imagine software were used, and three methods,
Maximum Likelihood, Mahalanobis Distance, and Minimum Distance,
were analyzed. For each of these methods, the error matrix and Kappa
index were calculated, and accuracies of 53.13%, 56.64%, and 48.44%
were obtained, respectively.
Considering the low accuracy of these methods in separating land
uses, visual interpretation of the imagery was used instead.
Finally, 150 randomly selected points were visited in the field and no
error was observed, showing that the map prepared by visual
interpretation has high accuracy. Although errors due to visual
interpretation and geometric correction may occur, the map meets
the desired accuracy of more than 85 percent and is
reliable.
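The error matrix and Kappa index mentioned above can be computed as follows (a standard sketch of Cohen's kappa, not code from the study):

```python
import numpy as np

def error_matrix(reference, predicted, n_classes):
    """Confusion (error) matrix: rows are reference classes, columns predictions."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for r, p in zip(reference, predicted):
        m[r, p] += 1
    return m

def kappa_index(m):
    """Cohen's kappa: agreement beyond what chance alone would produce."""
    n = m.sum()
    po = np.trace(m) / n                        # observed agreement
    pe = (m.sum(0) * m.sum(1)).sum() / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)
```

Unlike raw percentage accuracy, kappa discounts agreement expected by chance, which is why both measures are reported for classified maps.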
Abstract: In this paper, He's amplitude-frequency formulation is used to obtain a periodic solution for a nonlinear oscillator with fractional potential. Calculations and computer simulations, compared with the exact solution, show that the result obtained is of high accuracy.
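The abstract does not restate the formulation; for context, one common statement of He's frequency-amplitude formulation (our summary of the standard form, not taken from the paper) is:

```latex
% Nonlinear oscillator: u'' + f(u) = 0,  u(0) = A,  u'(0) = 0.
% Choose two trial solutions u_1 = A\cos t and u_2 = A\cos(\omega t)
% with trial frequencies \omega_1 = 1 and \omega_2 = \omega, and form
% the residuals R_i(t) = u_i'' + f(u_i).  The formulation reads
\[
\omega^2 = \frac{\omega_1^2 R_2 - \omega_2^2 R_1}{R_2 - R_1},
\]
% which, with the residuals evaluated at t = 0, collapses to
\[
\omega^2 = \frac{f(A)}{A},
\qquad\text{e.g. } \omega = A^{-1/3}
\text{ for the fractional potential } f(u) = u^{1/3}.
\]
```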
Abstract: Facial expression analysis plays a significant role for
human computer interaction. Automatic analysis of human facial
expression is still a challenging problem with many applications. In
this paper, we propose a neuro-fuzzy-based automatic facial expression
recognition system to recognize the human facial expressions of
happiness, fear, sadness, anger, disgust, and surprise. Initially, the
facial image is segmented into three regions, from which the uniform
Local Binary Pattern (LBP) texture feature distributions are extracted
and represented as histogram descriptors. The facial expressions are
recognized using a Multiple Adaptive Neuro-Fuzzy Inference System
(MANFIS). The proposed system was designed and tested on the JAFFE
face database and achieves a classification accuracy of 94.29%.
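As an illustration of the LBP features mentioned above, a basic 3x3 LBP histogram can be sketched as follows (this is the plain LBP operator; the paper uses the uniform-pattern variant, and the segmentation into three regions is omitted here):

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: threshold the 8 neighbours at the centre pixel
    and pack the comparison bits into one byte per pixel."""
    img = img.astype(int)
    c = img[1:-1, 1:-1]                       # centre pixels (border dropped)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(int) << bit)
    return code

def lbp_histogram(img, bins=256):
    """Normalised histogram of LBP codes, used as the texture descriptor."""
    hist = np.bincount(lbp_image(img).ravel(), minlength=bins).astype(float)
    return hist / hist.sum()
```

The uniform-pattern variant keeps only codes with at most two 0/1 transitions, shrinking the descriptor from 256 bins to 59 while retaining most texture information.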
Abstract: This work deals with aspects of support vector learning for large-scale data mining tasks. Based on a decomposition algorithm that can be run in serial and parallel mode, we introduce a data transformation that allows for the usage of an expensive generalized kernel without additional costs. In order to speed up the decomposition algorithm, we analyze the problem of working set selection for large data sets and the influence of the working set sizes on the scalability of the parallel decomposition scheme. Our modifications and settings improve support vector learning performance and thus allow extensive parameter search methods to be used to optimize classification accuracy.
Abstract: The absolute density of Cu atoms in the 2S1/2 ground
state has been measured by the Resonance Optical Absorption
(ROA) technique, using the 2S1/2-2P1/2 resonance transition, in a DC
magnetron sputtering deposition system with
argon. We measured these densities under a variety of operating
conditions: pressure from 0.6 μbar to 14 μbar, input power from
10 W to 200 W, and N2 admixture from 0% to 100%. For measuring the
gas temperature, we used simulations of N2 rotational spectra
with a special computer code. The absolute number density of Cu
atoms decreases with increasing N2 percentage of the buffer gas under
all conditions of this work. The deposition rate, however, does not
decrease in the same manner: its variation is very small
and within the accuracy limit of the quartz balance measuring equipment.
We therefore conclude that the decrease in the absolute number density of Cu
atoms in the magnetron plasma does not have a large effect on the deposition rate,
because the diffusion of Cu atoms into the chamber volume and the
deviation of Cu atoms from the direct path (towards the substrate)
decrease with increasing N2 percentage of the buffer gas. This is
because of the lower mass of N2 molecules compared to argon atoms.
Abstract: In this paper, He's energy balance method is applied to determine the frequency-amplitude relations of nonlinear oscillators with a discontinuous term or fractional potential. Calculations and computer simulations, compared with the exact solutions, show that the results obtained are of high accuracy.
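For context, the energy balance method referred to above is commonly stated as follows (our summary of the standard formulation, not taken from the paper):

```latex
% Nonlinear oscillator u'' + f(u) = 0, u(0) = A, u'(0) = 0, with
% Hamiltonian H = \tfrac12 u'^2 + F(u), where F' = f.
% Inserting the trial solution u = A\cos(\omega t) into the energy
% residual R(t) = \tfrac12 A^2\omega^2\sin^2(\omega t)
%                 + F(A\cos\omega t) - F(A)
% and collocating at \omega t = \pi/4 gives
\[
\frac{A^2\omega^2}{4} + F\!\left(\frac{A}{\sqrt{2}}\right) - F(A) = 0
\quad\Longrightarrow\quad
\omega = \frac{2}{A}\sqrt{F(A) - F\!\left(\frac{A}{\sqrt{2}}\right)}.
\]
```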
Abstract: For investigations of electromagnetic field
distributions in biological structures by the Finite Element Method
(FEM), a method for automatic 3D model building of human
anatomical objects is developed. Models are built from meshed
structures with specific electromagnetic material properties for each
tissue type. The mesh is built according to specific FEM criteria for
achieving good solution accuracy. Several FEM models of
anatomical objects are built. A formulation using the magnetic vector
potential and the electric scalar potential (A-V, A) is used for modeling
of electromagnetic fields in human tissue objects. The developed
models are suitable for investigations of electromagnetic field
distributions in human tissues exposed to external fields during
magnetic stimulation, defibrillation, impedance tomography, etc.
Abstract: In this paper, a system-level behavioural model for RF
power amplifiers that exhibit memory effects, based on a multi-branch
system, is proposed. When higher-order terms are included,
the memory polynomial model (MPM) exhibits numerical
instabilities. A set of orthogonal memory polynomials
(OMPM) is introduced to alleviate the numerical instability problem
associated with the MPM. A data scaling and centring algorithm was
applied to improve the power amplifier modeling accuracy.
Simulation results show that the numerical instability can be greatly
reduced and the model precision improved with the nonlinear OMPM
model.
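For context, the memory polynomial model referred to above is commonly written as follows (our summary of the standard form, not taken from the paper):

```latex
% Memory polynomial model (MPM): the output is a polynomial in
% delayed envelope samples,
\[
y(n) = \sum_{k=1}^{K}\sum_{m=0}^{M} a_{km}\, x(n-m)\,
       \lvert x(n-m)\rvert^{k-1}.
\]
% For large K the basis functions \lvert x\rvert^{k-1}x become nearly
% linearly dependent, so the least-squares identification of the
% coefficients a_{km} is ill-conditioned.  The OMPM replaces them with
% orthogonal polynomial basis functions \psi_k(\lvert x(n-m)\rvert)\,
% x(n-m), lowering the condition number of the data matrix without
% changing the span of the model.
```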
Abstract: Tracing and locating the geographical location of users (Geolocation) is used extensively in today's Internet. Whenever we, e.g., request a page from Google, we are automatically forwarded (unless a specific configuration was made) to the page in the relevant language, and, among other things, commercials specific to our identified location are presented. Geolocation has a significant impact especially within the area of network security. Because of the way the Internet works, attacks can be executed from almost everywhere. Therefore, for attribution, knowledge of the origin of an attack, and thus Geolocation, is mandatory in order to be able to trace back an attacker. In addition, Geolocation can also be used very successfully to increase the security of a network during operation (i.e. before an intrusion has actually taken place). Similar to greylisting for e-mail, Geolocation allows one to (i) correlate detected attacks with new connections and (ii) consequently classify traffic a priori as more suspicious (thus, in particular, allowing this traffic to be inspected in more detail). Although numerous techniques for Geolocation exist, each strategy is subject to certain restrictions. Following the ideas of Endo et al., this publication tries to overcome these shortcomings with a combined solution of different methods to allow improved and optimized Geolocation. Thus, we present our architecture for improved Geolocation, designing a new algorithm that combines several Geolocation techniques to increase the accuracy.