Abstract: Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space", where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single dialect region. Each speaker has 10 sentences; two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB, and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains ~93% at 0 dB SNR.
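The distance-based classification step can be sketched as follows. This is a minimal illustration with a hypothetical 4-atom dictionary and made-up index sequences, not the authors' code:

```python
# Minimal sketch (assumed data layout, not the authors' code): classify a test
# sentence by the Euclidean distance between its atomic-index probability
# vector and each speaker's training probability vector.
import math

def index_probabilities(indices, dict_size):
    """Normalized histogram of atomic (T-F dictionary) indices."""
    counts = [0] * dict_size
    for i in indices:
        counts[i] += 1
    total = sum(counts)
    return [c / total for c in counts]

def classify(test_probs, speaker_templates):
    """Return the speaker whose training probabilities are closest in L2 distance."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return min(speaker_templates, key=lambda s: dist(test_probs, speaker_templates[s]))

# Toy usage with a hypothetical 4-atom dictionary:
templates = {
    "spk_a": index_probabilities([0, 0, 1, 3], 4),
    "spk_b": index_probabilities([2, 2, 3, 3], 4),
}
test = index_probabilities([0, 1, 1, 3], 4)
print(classify(test, templates))
```

In practice each speaker's template would be built from the two training sentences, and each of the remaining eight sentences would be scored against all templates.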
Abstract: This paper presents a classifier ensemble approach for
predicting the survivability of breast cancer patients using the
latest database version of the Surveillance, Epidemiology, and End
Results (SEER) Program of the National Cancer Institute. The system
consists of two main components: a feature selection component and a
classifier ensemble component. The feature selection component
divides the features in the SEER database into four groups and then
tries to find the most important features among the four groups that
maximize the weighted average F-score of a given classification algorithm. The
ensemble component uses three different classifiers, each of which
models a different set of features from SEER through the feature
selection module. On top of them, another classifier is used to give
the final decision based on the output decisions and confidence
scores from each of the underlying classifiers. Different classification
algorithms have been examined; the best setup found uses the
decision tree, Bayesian network, and Naïve Bayes algorithms for the
underlying classifiers and Naïve Bayes for the classifier ensemble
step. The system outperforms all published systems to date when
evaluated against the exact same data of SEER (period of 1973-2002).
It gives 87.39% weighted average F-score compared to 85.82% and
81.34% of the other published systems. By increasing the data size to
cover the whole database (period of 1973-2014), the overall weighted
average F-score jumps to 92.4% on the held-out unseen test set.
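The two-level ensemble described above can be sketched roughly as follows. The feature names and rule-based base classifiers are made up for illustration, and a confidence-weighted vote stands in for the Naïve Bayes meta-classifier used in the paper:

```python
# Illustrative sketch of the two-level ensemble (not the authors' code): each
# base classifier sees its own feature group and emits a label plus a
# confidence score; a meta step combines them. A confidence-weighted vote
# stands in here for the Naive Bayes meta-classifier used in the paper.
def stacked_predict(record, base_classifiers, meta):
    """base_classifiers: list of (feature_group, clf) pairs, where clf(features)
    returns (label, confidence). meta combines the base outputs."""
    outputs = [clf([record[f] for f in group]) for group, clf in base_classifiers]
    return meta(outputs)

def weighted_vote(outputs):
    """Stand-in meta-classifier: sum confidences per label, take the max."""
    scores = {}
    for label, conf in outputs:
        scores[label] = scores.get(label, 0.0) + conf
    return max(scores, key=scores.get)

# Toy usage with hypothetical feature names and rule-based base classifiers:
record = {"tumor_size": 4.2, "age": 61, "nodes": 3}
bases = [
    (["tumor_size"], lambda x: ("not_survived", 0.7) if x[0] > 3 else ("survived", 0.6)),
    (["age"],        lambda x: ("survived", 0.55) if x[0] < 70 else ("not_survived", 0.6)),
    (["nodes"],      lambda x: ("not_survived", 0.8) if x[0] > 2 else ("survived", 0.7)),
]
print(stacked_predict(record, bases, weighted_vote))
```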
Abstract: This paper presents the data of a series of two-dimensional Discrete Element Method (DEM) simulations of a large-diameter rigid monopile subjected to cyclic loading under a high gravitational force. At present, monopile foundations are widely used to support tall and heavy wind turbines, which are also subjected to significant loading from wind and wave actions. A safe design must address issues such as rotations and changes in soil stiffness under these loading conditions. Design guidance on the issue is limited, as is the availability of laboratory and field test data. The interpretation of these results in sand, such as the relation between loading and displacement, relies mainly on empirical correlations to pile properties. Regarding numerical models, most available data come from the Finite Element Method (FEM). They are not comprehensive, and most of the FEM results are sensitive to input parameters. The micro-scale behaviour could change the mechanism of the soil-structure interaction. A DEM model was used in this paper to study the behaviour under cyclic lateral loads. A non-dimensional framework is presented and applied to interpret the simulation results. The DEM data compare well with various sets of published experimental centrifuge model test data in terms of lateral deflection. The accumulated permanent pile lateral displacements induced by the cyclic lateral loads were found to depend on the characteristics of the applied cyclic load, such as the extent of the loading magnitudes and directions.
Abstract: We assume an IoT-based smart-home environment where the on-off status of each of the electrical appliances, including the room lights, can be recognized in real time by monitoring and analyzing the smart meter data. At any moment in such an environment, we can recognize what the household or the user is doing by referring to the status data of the appliances. In this paper, we focus on a smart-home service that activates a robot vacuum cleaner at the right time by recognizing the user situation, which requires a situation-aware model that can distinguish the situations that allow vacuum cleaning (Yes) from those that do not (No). We learn as our candidate models a few classifiers such as naïve Bayes, decision tree, and logistic regression that can map the appliance-status data into Yes and No situations. Our training and test data are obtained from simulations of user behaviors, in which a sequence of user situations such as cooking, eating, dish washing, and so on is generated, with the status of the relevant appliances changed in accordance with the situation changes. During the simulation, both the situation transitions and the resulting appliance status are determined stochastically. To compare the performances of the aforementioned classifiers, we obtain their learning curves for different types of users through simulations. The result of our empirical study reveals that naïve Bayes achieves a slightly better classification accuracy than the other compared classifiers.
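A minimal sketch of the naïve Bayes mapping from appliance status to Yes/No situations, assuming binary on/off features and hypothetical appliances (not the authors' simulator):

```python
# Minimal Bernoulli naive Bayes sketch (hypothetical appliances, not the
# authors' simulator): maps on/off appliance status vectors to Yes/No
# "vacuum cleaning allowed" situations, with Laplace smoothing.
import math

def train_nb(X, y):
    """X: list of binary status vectors; y: list of 'Yes'/'No' labels."""
    classes = set(y)
    n_feat = len(X[0])
    prior, cond = {}, {}
    for c in classes:
        rows = [x for x, lab in zip(X, y) if lab == c]
        prior[c] = len(rows) / len(X)
        # P(feature = 1 | class), Laplace-smoothed
        cond[c] = [(sum(r[j] for r in rows) + 1) / (len(rows) + 2)
                   for j in range(n_feat)]
    return prior, cond

def predict_nb(model, x):
    prior, cond = model
    def log_post(c):
        lp = math.log(prior[c])
        for xj, pj in zip(x, cond[c]):
            lp += math.log(pj if xj else 1 - pj)
        return lp
    return max(prior, key=log_post)

# Toy data: features = [stove_on, tv_on, light_on]
X = [[1, 0, 1], [1, 0, 0], [0, 1, 1], [0, 0, 0], [0, 0, 1]]
y = ["No", "No", "No", "Yes", "Yes"]  # cleaning allowed only when idle
model = train_nb(X, y)
print(predict_nb(model, [0, 0, 1]))
```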
Abstract: This article develops a 3D numerical model of ion thruster
optic system sputter erosion depth using IFE-PIC (Immersed
Finite Element-Particle-in-Cell) and Monte Carlo methods,
calculates the downstream surface sputter erosion rate of the
accelerator grid, and compares it with LIPS-200 life test data. The
results of the numerical model are in reasonable agreement with the
measured data. Finally, we predicted the lifetime of the 20 cm
diameter ion thruster via the erosion data obtained with the model.
The ultimate result demonstrates that, under normal operating
conditions, the erosion rate of the grooves worn on the downstream
surface of the accelerator grid is 34.6 μm/1000 h, which means the
conservative lifetime before structural failure occurs on the
accelerator grid is 11,500 hours.
Abstract: In this paper, genetic-based test data compression is
targeted at improving the compression ratio and reducing the
computation time. The genetic algorithm is based on extended pattern
run-length coding. The test set contains a large number of X values
that can be effectively exploited to improve the test data
compression. In this coding method, a reference pattern is set and its
compatibility is checked. For this process, a genetic algorithm is
proposed to reduce the computation time of the encoding algorithm.
This coding technique encodes the 2n compatible patterns or the
inversely compatible patterns into a single test data segment or
multiple test data segments. The experimental results show that the
compression ratio is improved and the computation time is reduced.
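The compatibility check at the heart of this coding can be sketched as follows; the genetic search for good reference patterns is omitted, and the segment values are made up for illustration:

```python
# Sketch of the compatibility test behind extended pattern run-length coding
# (simplified; the genetic search over reference patterns is omitted). 'X'
# bits are don't-cares, so a segment can match the reference directly or in
# inverted form, and a run of matching segments shares one codeword.
def compatible(seg, ref):
    """True if seg equals ref wherever neither side is a don't-care."""
    return all(s in ("X", r) or r == "X" for s, r in zip(seg, ref))

def inversely_compatible(seg, ref):
    """True if seg matches the bitwise inverse of ref (X stays X)."""
    inv = {"0": "1", "1": "0", "X": "X"}
    return compatible(seg, "".join(inv[r] for r in ref))

def run_length(segments, ref):
    """Count how many leading segments can share ref's codeword."""
    n = 0
    for seg in segments:
        if compatible(seg, ref) or inversely_compatible(seg, ref):
            n += 1
        else:
            break
    return n

segs = ["1X01", "1001", "0110", "1100"]
print(run_length(segs, "1001"))
```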
Abstract: An analytical 4-DOF nonlinear model of a de Laval
rotor-stator system based on Energy Principles has been used
theoretically and experimentally to investigate fault symptoms in a
rotating system. The faults, namely rotor-stator-rub, crack and
unbalance are modeled as excitations on the rotor shaft. Mayes
steering function is used to simulate the breathing behaviour of the
crack. The fault analysis technique is based on waveform signal,
orbits and Fast Fourier Transform (FFT) derived from simulated and
real measured signals. Simulated and experimental results manifest
considerable mutual resemblance of elliptic-shaped orbits and FFT
for the same range of test data.
Abstract: Structural failure is caused mainly by damage that
often occurs in structures. Many researchers have focused on
obtaining efficient tools to detect damage in structures at an early
stage. In the past decades, a subject that has received considerable
attention in the literature is damage detection as determined by
variations in the dynamic characteristics or response of structures.
This study presents a new damage identification technique. The
technique detects the damage location for an incomplete structural
system using output data only. The method indicates the damage
based on free vibration test data by using the 'Two Points
Condensation (TPC)' technique. This method creates a set of matrices
by reducing the structural system to two-degree-of-freedom systems.
The current stiffness matrices are obtained by optimizing the
equation of motion using the measured test data. The current stiffness
matrices are compared with the original (undamaged) stiffness
matrices. Large percentage changes in the matrix coefficients lead to
the location of the damage. The TPC technique is applied to the
experimental data of a simply supported steel beam model structure
after inducing a thickness change in one element, where two cases are
considered. The method detects the damage and determines its
location accurately in both cases. In addition, the results illustrate
that these changes in the stiffness matrices can be a useful tool for
continuous monitoring of structural safety using ambient vibration
data. Furthermore, its efficiency proves that this technique can also
be used for large structures.
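The comparison step can be illustrated with a toy example; the stiffness values and the 10% threshold are hypothetical, not taken from the paper:

```python
# Illustrative check (hypothetical numbers): flag a damage location by the
# percentage change between the identified (current) and undamaged 2x2
# condensed stiffness matrices, as in the TPC comparison step.
def percent_change(k_current, k_undamaged):
    """Element-wise absolute percentage change between two matrices."""
    return [[abs(c - u) / abs(u) * 100.0 for c, u in zip(rc, ru)]
            for rc, ru in zip(k_current, k_undamaged)]

def damaged(k_current, k_undamaged, threshold=10.0):
    """True if any condensed stiffness coefficient changed by more than threshold %."""
    return any(p > threshold
               for row in percent_change(k_current, k_undamaged) for p in row)

k0 = [[2.0e6, -1.0e6], [-1.0e6, 2.0e6]]   # undamaged (hypothetical, N/m)
k1 = [[1.6e6, -0.9e6], [-0.9e6, 1.9e6]]   # identified after damage
print(damaged(k1, k0))
```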
Abstract: In VLSI, testing plays an important role. Major
problems in testing are test data volume and test power. An important
solution for reducing test data volume and test time is test data
compression. The proposed technique combines the bit-mask
dictionary and 2n pattern run-length coding methods and provides a
substantial improvement in compression efficiency without
introducing any additional decompression penalty. This method has
been implemented using MATLAB and an HDL language to reduce
test data volume and memory requirements. The method is applied to
various benchmark test sets and the results are compared with other
existing methods. The proposed technique can achieve a compression
ratio of up to 86%.
Abstract: This paper presents the results obtained by numerical
simulation using the software ANSYS CFX-CFD for the air
pollutants dispersion in the atmosphere coming from the evacuation
of combustion gases resulting from the fuel combustion in an electric
thermal power plant. The model uses the Navier-Stokes equation to
simulate the dispersion of pollutants in the atmosphere. Important
factors considered in elaborating the simulation are the atmospheric
conditions (pressure, temperature, wind speed, wind direction), the
exhaust velocity of the combustion gases, the chimney height, and the
obstacles (buildings). Using air quality monitoring stations, the
concentrations of the main pollutants (SO2, NOx, and PM) were
measured. The pollutants were monitored over a period of 3 months,
after which the average concentrations were calculated and used by
the software. The concentrations are: 8.915 μg/m3 (NOx),
9.587 μg/m3 (SO2), and 42 μg/m3 (PM). A comparison of test data
with simulation results demonstrated that CFX was able to describe
the dispersion of the pollutants as well as their concentrations in the
atmosphere.
Abstract: Sound processing is one of the subjects that has
recently attracted many researchers. It is efficient and usually less
expensive than other methods. In this paper, the flow-generated sound is used to
estimate the flow speed of free flows. Many sound samples are
gathered. After analyzing the data, a parameter named wave power is
chosen. For all samples the wave power is calculated and averaged
for each flow speed. A curve is fitted to the averaged data and a
correlation between the wave power and flow speed is found. Test
data are used to validate the method and errors for all test data were
under 10 percent. The speed of the flow can be estimated by
calculating the wave power of the flow-generated sound and using the
proposed correlation.
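A sketch of the fitting-and-inversion idea with synthetic numbers; the paper does not specify the fitted curve form, so a power law is assumed here:

```python
# Hedged sketch (synthetic numbers, assumed power-law fit form): fit a
# correlation between averaged wave power and flow speed by linear least
# squares in log-log space, then invert it to estimate speed from sound.
import math

def fit_powerlaw(speeds, powers):
    """Fit power = c * speed**m; returns (c, m) via log-log least squares."""
    xs = [math.log(v) for v in speeds]
    ys = [math.log(p) for p in powers]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    c = math.exp(my - m * mx)
    return c, m

def estimate_speed(power, c, m):
    """Invert the fitted correlation: speed = (power / c)**(1/m)."""
    return (power / c) ** (1.0 / m)

speeds = [1.0, 2.0, 3.0, 4.0]            # m/s (synthetic)
powers = [2.0 * v ** 2 for v in speeds]  # exactly quadratic toy data
c, m = fit_powerlaw(speeds, powers)
print(estimate_speed(2.0 * 2.5 ** 2, c, m))
```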
Abstract: To construct the lumped spring-mass model
considering the occupants for the offset frontal crash, the SISAME
software and the NHTSA test data were used. The data on 56 kph 40%
offset frontal vehicle to deformable barrier crash test of a MY2007
Mazda 6 4-door sedan were obtained from the NHTSA test database. The
overall behaviors of B-pillar and engine of simulation models agreed
very well with the test data. The trends of the accelerations at the
driver and passenger heads were similar, but there were large
differences in peak values. The differences in peak values caused
large errors in the HIC36 and 3 ms chest g's. To better predict the
behaviors of the dummies, the spring-mass model for the offset
frontal crash needs to be improved.
Abstract: An experimental and analytical study of shear
buckling of a comparatively large polymer composite I-section is
presented. It is known that shear buckling load of a large span
composite beam is difficult to determine experimentally. In order to
sensitively detect shear buckling of the tested I-section, twenty strain
rosettes and eight displacement sensors were applied and attached on
the web and flange surfaces. The tested specimen was a pultruded
composite beam made of vinylester resin, E-glass, carbon fibers and
micro-fillers. Various coupon tests were performed before the shear
buckling test to obtain fundamental material properties of the I-section.
An asymmetric four-point bending loading scheme was
utilized for the shear test. The loading scheme resulted in a high shear
and almost zero moment condition at the center of the web panel. The
shear buckling load was successfully determined after analyzing the
obtained test data from strain rosettes and displacement sensors. An
analytical approach was also carried out to verify the experimental
results and to support the discussed experimental program.
Abstract: The 6th version of the Universal Modeling Method for
centrifugal compressor stage calculation is described. The new
mathematical model was identified. As a result of the identification,
a uniform set of empirical coefficients was obtained. The efficiency
definition error is 0.86% at the design point. The efficiency
definition error at five flow rate points (except the point of
maximum flow rate) is 1.22%. Several variants of the stage with
3D impellers designed by the 6th version program and quasi-three-dimensional
calculation programs were compared by their gas-dynamic
performances computed with CFD (NUMECA FINE TURBO).
The performance comparison demonstrated the general validity of the
design principles and leads to some design recommendations.
Abstract: The number of Ground Motion Prediction Equations
(GMPEs) used for predicting peak ground acceleration (PGA) and
the number of earthquake recordings that have been used for fitting
these equations has increased in the past decades. The current PF-L
database contains 3550 recordings. Since the GMPEs frequently
model the peak ground acceleration, the goal of the present study was
to refit a selection of 44 of the existing equation models for PGA in
light of the latest data. The Levenberg-Marquardt algorithm was used
for fitting the coefficients of the equations, and the results are
evaluated both quantitatively by presenting the root mean squared
error (RMSE) and qualitatively by drawing graphs of the five best
fitted equations. The RMSE was found to be as low as 0.08 for the
best equation models. The newly estimated coefficients vary from the
values published in the original works.
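The refitting procedure can be illustrated on a simplified GMPE form with synthetic, noise-free records; since ln(PGA) = a + b·M + c·ln(R) is linear in its coefficients, plain least squares suffices in this sketch, whereas Levenberg-Marquardt (as used in the study) also handles nonlinear equation forms:

```python
# Hedged sketch (synthetic records, simplified GMPE form): refit the
# coefficients of ln(PGA) = a + b*M + c*ln(R) and report the RMSE of the
# residuals, as in the study's quantitative evaluation.
import math

def solve3(A, b):
    """Gauss-Jordan elimination for a 3x3 system with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_gmpe(mags, dists, pgas):
    """Least-squares fit of (a, b, c) via the normal equations."""
    rows = [[1.0, m, math.log(r)] for m, r in zip(mags, dists)]
    ys = [math.log(p) for p in pgas]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Atb = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    return solve3(AtA, Atb)

def rmse(coeffs, mags, dists, pgas):
    a, b, c = coeffs
    res = [math.log(p) - (a + b * m + c * math.log(r))
           for m, r, p in zip(mags, dists, pgas)]
    return math.sqrt(sum(e * e for e in res) / len(res))

# Synthetic records generated from a=-4, b=1.2, c=-1.5 (noise-free):
mags = [5.0, 6.0, 6.5, 7.0, 5.5]
dists = [10.0, 30.0, 50.0, 20.0, 15.0]
pgas = [math.exp(-4 + 1.2 * m - 1.5 * math.log(r)) for m, r in zip(mags, dists)]
coeffs = fit_gmpe(mags, dists, pgas)
print(rmse(coeffs, mags, dists, pgas))
```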
Abstract: The main advantage of multidirectionally reinforced composites is the freedom to orient selected fibre types, and hence derive the benefits of varying fibre volume fractions, and thereby accommodate the design loads of the final composite structure. This technology provides the means to produce tailored composites with desired properties. Due to the high level of fibre integrity with through-thickness reinforcement, those composites are expected to exhibit superior load-bearing characteristics, with the capability to carry load even after noticeable and apparent fracture. However, a survey of the published literature indicates inadequacy in the design and test database for the complete characterization of multidirectional composites. In this paper, the research objective is focused on the development and testing of 4-D orthogonal composites with different preform configurations and resin systems. A preform is the skeleton of the 4-D reinforced composite other than the matrix. In 4-D preforms, fibre bundles are oriented in three directions at 120° with respect to each other, and this plane is orthogonal to the fibre in the 4th direction. This paper addresses the various types of 4-D composite manufacturing processes and the mechanical test methods followed for the material characterization. A composite analysis is also made, experiments on coarse and fine woven preforms are conducted, and the findings of the test results are discussed in this paper. The interpretations of the test results reveal several useful and interesting features. This should pave the way for more widespread use of the preform configurations for allied applications.
Abstract: As big data analysis becomes important, yield prediction using data from the semiconductor process is essential. In general, yield prediction and analysis of the causes of failure are closely related. The purpose of this study is to analyze the patterns that affect the final test results using die map based clustering. Many studies have been conducted using die data from the semiconductor test process. However, such analysis has limitations, as the test data are less directly related to the final test results. Therefore, this study proposes a framework for analysis through clustering using more detailed data than the existing die data. This study consists of three phases. In the first phase, a die map is created from fail bit data in each sub-area of the die. In the second phase, clustering using the map data is performed. In the third phase, patterns that affect the final test result are found. Finally, the proposed three steps are applied to actual industrial data, and the experimental results showed the potential for field application.
Abstract: In this research, the capability of neural networks in
modeling and learning complicated and nonlinear relations has been
used to develop a model for the prediction of changes in the diameter
of bubbles in pool boiling distilled water. The input parameters used
in the development of this network include element temperature, heat
flux, and retention time of bubbles. The test data were obtained from
pool boiling experiments on distilled water and from measurements
of the bubbles formed on the cylindrical element. The model was
developed based on a training algorithm of the back-propagation
type. The correlation coefficient obtained from this model is 0.9633,
which shows that this model can be trusted for the simulation and
modeling of bubble size and the thermal transfer of boiling.
Abstract: In this research, the changes in bubble diameter and
number that may occur due to changes in the heat flux of pure water
during the pool boiling process were studied. For this purpose, test
equipment was designed and developed to collect test data. The
bubbles were graded using Caliper Screen software. To calculate the
growth and nucleation rates of bubbles under different fluxes, a
population balance model was employed. The results show that the
increase in heat flux from q = 20 kW/m2 to q = 102 kW/m2 raised the
growth and nucleation rates of bubbles.
Abstract: This study evaluates the backcalculation of the stiffness of a pavement section on Interstate 40 (I-40) in New Mexico through numerical analysis. A Falling Weight Deflectometer (FWD) test has been conducted on a section of I-40. The layer stiffnesses of the pavement have been backcalculated with the backcalculation software ELMOD using the FWD test data. The commercial finite element software ABAQUS has been used to develop the Finite Element Model (FEM) of this pavement section. Geometry and layer thicknesses are collected from field coring. The input parameters, i.e., the stiffnesses of the different pavement layers, are taken as the backcalculated values. The resulting surface deflections at different radial distances from the FEM analysis are compared with the field FWD deflection values. They show close agreement between the FEM and FWD outputs. Therefore, the FWD test method can be considered a reliable test procedure for evaluating the in situ stiffness of pavement material.