Abstract: Process capability index Cpk is the most widely
used index in making managerial decisions since it provides bounds
on the process yield for normally distributed processes. However,
existing methods for assessing process performance, which are
constructed by statistical inference, may unfortunately lead to
questionable results, because uncertainties exist in most real-world applications.
Thus, this study adopts fuzzy inference to deal with testing of Cpk .
A concise score is obtained for assessing a supplier's process
instead of a strict evaluation.
Abstract: Repetitive control and feedback dithering modulation
are applied to a single-phase voltage source inverter, with an aim to
eliminate harmonics and stabilize the inverter under load variations.
The proposed control and modulation scheme comprise multiple loops
of feedback, which helps improve inverter performance and
robustness. Experimental results show that the designed inverter
exhibits very low distortion at its output with THD of about 0.3%
under different load variations.
Abstract: The one-class support vector machine “support vector
data description” (SVDD) is an ideal approach for anomaly or outlier
detection. However, for the applicability of SVDD in real-world
applications, ease of use is crucial. The results of SVDD are
largely determined by the choice of the regularisation parameter C
and the kernel parameter of the widely used RBF kernel. While for
two-class SVMs the parameters can be tuned using cross-validation
based on the confusion matrix, for a one-class SVM this is not
possible, because only true positives and false negatives can occur
during training. This paper proposes an approach to find the optimal
set of parameters for SVDD solely based on a training set from
one class and without any user parameterisation. Results on artificial
and real data sets are presented, underpinning the usefulness of the
approach.
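The parameter sensitivity the abstract describes can be illustrated with scikit-learn's ν-parameterised one-class SVM, which is equivalent to SVDD when the RBF kernel is used. The data and the parameter values below are illustrative assumptions, not the paper's tuning procedure:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(200, 2))   # one-class training data

# For the RBF kernel, SVDD is equivalent to the nu-parameterised
# one-class SVM: nu plays the role of the regularisation parameter C,
# and gamma is the RBF kernel width. Both must be chosen well.
model = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.5).fit(X_train)

inlier = model.predict([[0.0, 0.0]])[0]    # +1: inside the description
outlier = model.predict([[6.0, 6.0]])[0]   # -1: flagged as an anomaly
print(inlier, outlier)
```

Changing `nu` or `gamma` moves the decision boundary, which is exactly why tuning them without labelled outliers is hard: during training only one class is available, so a confusion-matrix-based cross-validation cannot be computed.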
Abstract: In the automotive industry test drives are being conducted
during the development of new vehicle models or as a part of
quality assurance of series-production vehicles. The communication
on the in-vehicle network, data from external sensors, or internal
data from the electronic control units is recorded by automotive
data loggers during the test drives. The recordings are used for fault
analysis. Since the resulting data volume is tremendous, manually
analysing each recording in great detail is not feasible.
This paper proposes using machine learning to support domain
experts by sparing them from contemplating irrelevant data and
instead pointing them to the relevant parts of the recordings. The
underlying idea is to learn the normal behaviour from available
recordings, i.e. a training set, and then to autonomously detect
unexpected deviations and report them as anomalies.
The one-class support vector machine “support vector data description”
is utilised to calculate distances of feature vectors. SVDDSUBSEQ
is proposed as a novel approach for classifying subsequences
in multivariate time series data. The approach makes it possible to
detect unexpected faults without modelling effort, as is shown with
experimental results on recordings from test drives.
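The paper's SVDDSUBSEQ details are not given in the abstract, but the general subsequence-classification idea — extract features from sliding windows of a recording, train a one-class model on normal data, and flag deviating windows — can be sketched roughly as follows. The window width, the features, and all parameter values are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import OneClassSVM

def windows(signal, width, step):
    """Slide a window over a 1-D signal and return per-window features."""
    feats = []
    for start in range(0, len(signal) - width + 1, step):
        w = signal[start:start + width]
        feats.append([w.mean(), w.std(), w.max() - w.min()])
    return np.array(feats)

rng = np.random.default_rng(1)
# Stand-in for a normal recording: a periodic signal plus sensor noise.
normal = np.sin(np.linspace(0, 40 * np.pi, 4000)) + 0.05 * rng.normal(size=4000)
model = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale").fit(windows(normal, 100, 50))

test = normal.copy()
test[2000:2100] += 3.0                      # inject an unexpected deviation
labels = model.predict(windows(test, 100, 50))
print(np.where(labels == -1)[0])            # indices of flagged windows
```

Only the windows overlapping the injected deviation deviate from the learned normal behaviour, so the expert can be pointed directly at those positions in the recording.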
Abstract: The article examines the methods of protection of
citizens' personal data on the Internet using biometric identity
authentication technology. The potential danger of these
technologies, stemming from the threat of losing databases of
biometric templates, is noted. To eliminate the threat of
compromised biometric templates, it is proposed to use neural
networks of large and extra-large size, which on the one hand
authenticate a person by his biometrics with high reliability,
and on the other hand make the person's biometrics unavailable
for observation and analysis. This article also describes in detail
the transformation of personal biometric data into an access code.
Requirements are formulated for the biometrics-to-code converter
for its operation with images of the "Insider," a "Stranger," and
all "Strangers". The effect of the dimensionality of the neural
networks on the quality of the converters of secret biometrics
into access codes is analysed.
Abstract: Although there has been much research in cluster
analysis on feature weights, little effort has been made on sample
weights. Recently, Yu et al. (2011) considered a probability
distribution over a data set to represent its sample weights and then
proposed sample-weighted clustering algorithms. In this paper, we
give a sample-weighted version of generalized fuzzy clustering
regularization (GFCR), called the sample-weighted GFCR
(SW-GFCR). Several experiments are conducted. The experimental
results and comparisons demonstrate that the proposed SW-GFCR is
more effective than most clustering algorithms.
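GFCR's regularization term is not specified in the abstract, so as a hedged illustration of the sample-weighting idea alone, here is a plain fuzzy c-means whose objective weights each sample by a probability w_i; the weighted update of the cluster centres is the essential change, and the data are synthetic:

```python
import numpy as np

def sample_weighted_fcm(X, w, c, m=2.0, iters=100, eps=1e-9):
    """Fuzzy c-means with a probability distribution w over the samples.
    Illustrative objective: sum_i sum_k w_i * u_ik^m * ||x_i - v_k||^2."""
    rng = np.random.default_rng(0)
    U = rng.dirichlet(np.ones(c), size=len(X))          # fuzzy memberships
    for _ in range(iters):
        Um = (U ** m) * w[:, None]                      # sample-weighted memberships
        V = Um.T @ X / Um.sum(axis=0)[:, None]          # weighted cluster centres
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + eps
        U = 1.0 / (d2 ** (1.0 / (m - 1)))               # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U, V

X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (50, 2)),
               np.random.default_rng(2).normal(3, 0.3, (50, 2))])
w = np.full(100, 1 / 100)                               # uniform sample weights
U, V = sample_weighted_fcm(X, w, c=2)
print(np.round(V, 1))                                   # centres near (0,0) and (3,3)
```

With uniform weights this reduces to ordinary fuzzy c-means; a non-uniform distribution w shifts the centres towards the more heavily weighted samples.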
Abstract: Serial hierarchical support vector machine (SHSVM)
is proposed to discriminate three brain tissues which are white matter
(WM), gray matter (GM), and cerebrospinal fluid (CSF). SHSVM
employs a novel classification approach, repeating the hierarchical
classification on the data set iteratively. It uses a radial basis
function (RBF) kernel with different tunings to obtain accurate
results. As a second approach, segmentation is performed with the
DAGSVM method. In this article, eight univariate features are
extracted from the raw DTI data and all possible 2D feature sets are
examined within the segmentation process. SHSVM succeeds in
obtaining DSI values higher than 0.95 for all three tissues, which
are higher than the DAGSVM results.
Abstract: Distant-talking voice-based HCI systems suffer from
performance degradation due to mismatch between the acoustic
speech (runtime) and the acoustic model (training). Mismatch is
caused by the change in the power of the speech signal as observed at
the microphones. This change is greatly influenced by the change in
distance, affecting speech dynamics inside the room before reaching
the microphones. Moreover, as the speech signal is reflected, its
acoustical characteristic is also altered by the room properties. In
general, power mismatch due to distance is a complex problem. This
paper presents a novel approach in dealing with distance-induced
mismatch by intelligently sensing instantaneous voice power variation
and compensating model parameters. First, the distant-talking speech
signal is processed through microphone array processing, and the
corresponding distance information is extracted. Distance-sensitive
Gaussian Mixture Models (GMMs), pre-trained to capture both
speech power and room property are used to predict the optimal
distance of the speech source. Consequently, pre-computed
statistical priors corresponding to the optimal distance are
selected to correct the statistics of the generic model, which was
frozen during training. Thus, model combinatorics are
post-conditioned to match the power of instantaneous speech
acoustics at runtime. This results in an improved likelihood of
predicting the correct speech command at
farther distances. We experiment using real data recorded inside two
rooms. Experimental evaluation shows voice recognition performance
using our method is more robust to the change in distance compared
to the conventional approach. In our experiment, under the most
acoustically challenging environment (i.e., Room 2: 2.5 meters), our
method achieved 24.2% improvement in recognition performance
against the best-performing conventional method.
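The distance-prediction step can be sketched in a simplified form: train one GMM per recording distance on power-based features, then pick the distance whose model gives the highest likelihood for the observed frames. The features, distances, and parameter values below are synthetic stand-ins, not the paper's data or model structure:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical power features for speech recorded at two distances:
# closer speech arrives with higher mean power at the microphones.
feats_1m = rng.normal(loc=-10.0, scale=2.0, size=(500, 1))
feats_3m = rng.normal(loc=-25.0, scale=2.0, size=(500, 1))

# One distance-sensitive GMM per training distance.
gmms = {1.0: GaussianMixture(2, random_state=0).fit(feats_1m),
        3.0: GaussianMixture(2, random_state=0).fit(feats_3m)}

def predict_distance(frame_feats):
    """Pick the distance whose GMM best explains the observed frames."""
    return max(gmms, key=lambda d: gmms[d].score(frame_feats))

print(predict_distance(rng.normal(-24.0, 2.0, size=(50, 1))))  # → 3.0
```

The predicted distance then indexes the pre-computed priors used to correct the frozen generic model, as described in the abstract.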
Abstract: The last is regarded as the critical foundation of shoe
design and development. A computer-aided methodology for various
last form designs is proposed in this study. Reverse engineering is
mainly applied to the process of scanning the last form. Then, with
minimum energy for revision of surface continuity, the surface of
the last is reconstructed from the feature curves of the scanned
last. When the surface reconstruction of the last is completed, the
weighted arithmetic mean method is applied to compute the shape
morphing of the last's control mesh, so that 3D last forms of
different sizes are generated from the original form with its
feature functions retained. Finally, the result of this study is
applied to a 3D last reconstruction system. The practicability of
the proposed methodology is verified through subsequent case
studies.
Abstract: The zero truncated model is usually used in modeling
count data without zero. It is the opposite of zero inflated model.
Zero truncated Poisson and zero truncated negative binomial models
are discussed and used by some researchers in analyzing the
abundance of rare species and hospital stay. Zero truncated models
are used as the base in developing hurdle models. In this study, we
developed a new model, the zero truncated strict arcsine model,
which can be used as an alternative model in modeling count data
without zero and with extra variation. Two simulated data sets and
one real-life data set are fitted to the developed model. The
results show that the model provides a good fit to the data. The
maximum likelihood estimation method is used to estimate the
parameters.
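The strict arcsine distribution is not reproduced here, but the zero-truncation construction itself can be illustrated with the simpler zero-truncated Poisson, whose probabilities are renormalised by 1 − e^(−λ), together with maximum likelihood estimation. The data are invented for illustration:

```python
import math
from scipy.optimize import minimize_scalar

def ztp_pmf(k, lam):
    """Zero-truncated Poisson: P(X=k) = lam^k e^-lam / (k! (1 - e^-lam)), k >= 1."""
    return lam**k * math.exp(-lam) / (math.factorial(k) * (1 - math.exp(-lam)))

def ztp_mle(data):
    """Maximum likelihood estimate of lam for zero-truncated count data."""
    nll = lambda lam: -sum(math.log(ztp_pmf(k, lam)) for k in data)
    return minimize_scalar(nll, bounds=(1e-6, 50), method="bounded").x

data = [1, 2, 1, 3, 2, 1, 4, 2, 2, 1]        # counts with no zeros
lam_hat = ztp_mle(data)
print(round(lam_hat, 2))
# The truncated probabilities still sum to one over k = 1, 2, ...
assert abs(sum(ztp_pmf(k, lam_hat) for k in range(1, 50)) - 1.0) < 1e-9
```

The zero-truncated strict arcsine model follows the same recipe — divide the untruncated probabilities by one minus the probability of zero — with the strict arcsine probabilities in place of the Poisson ones.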
Abstract: Octree compression techniques have been used
for several years for compressing large three dimensional data
sets into homogeneous regions. This compression technique
is ideally suited to datasets which have similar values in
clusters. Oil engineers represent reservoirs as a three dimensional
grid where hydrocarbons occur naturally in clusters. This
research looks at the efficiency of storing these grids using
octree compression techniques where grid cells are broken
into active and inactive regions. Initial experiments yielded
high compression ratios, as only active leaf nodes and their
ancestor (header) nodes are stored as a bitstream in a file on
disk. Savings in computational time and memory were possible
at decompression, as only active leaf nodes are sent to the
graphics card, eliminating the need to reconstruct the original
matrix. This results in a more compact vertex table, which can
be loaded into the graphics card more quickly, generating shorter
refresh delay times.
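The homogeneous-region idea can be sketched with a quadtree, the 2-D analogue of an octree: recursively split the grid until each region holds a single value, so a grid whose active cells occur in small clusters collapses to a handful of leaf nodes. The grid below is illustrative:

```python
import numpy as np

def compress(grid):
    """Quadtree (2-D analogue of an octree) compression: recursively split
    until each region is homogeneous, storing one value per leaf."""
    if np.all(grid == grid.flat[0]):
        return grid.flat[0]                      # homogeneous leaf node
    h, w = grid.shape
    return [compress(grid[:h//2, :w//2]), compress(grid[:h//2, w//2:]),
            compress(grid[h//2:, :w//2]), compress(grid[h//2:, w//2:])]

def count_leaves(node):
    return 1 if not isinstance(node, list) else sum(map(count_leaves, node))

grid = np.zeros((8, 8), dtype=int)
grid[0:2, 0:2] = 1                               # one small "active" cluster
tree = compress(grid)
print(count_leaves(tree), "leaves instead of", grid.size, "cells")  # 7 leaves instead of 64 cells
```

An octree applies the same split along a third axis, giving eight children per node; the reservoir-grid scheme additionally drops inactive leaves entirely, keeping only active leaves and their ancestors in the stored bitstream.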
Abstract: This paper describes an approach to predicting
incoming and outgoing data rates in a network system by using
association rule discovery, one of the data mining techniques.
Information on incoming and outgoing data at each time and the
network bandwidth are network performance parameters that need to
be addressed in traffic problems, since congestion and data loss
are important network problems. The results of this technique can
predict future network traffic. In addition, this research is
useful for network routing selection and network performance
improvement.
Abstract: A computationally simple approach of model order
reduction for single-input single-output (SISO) linear
time-invariant discrete systems modeled in the frequency domain is
proposed in this paper. The denominator of the reduced-order model
is determined
using fuzzy C-means clustering while the numerator parameters are
found by matching time moments and Markov parameters of high
order system.
Abstract: This paper presents a design method of self-tuning
Quantitative Feedback Theory (QFT) by using improved deadbeat
control algorithm. QFT is a technique to achieve robust control with
pre-defined specifications whereas deadbeat is an algorithm that
could bring the output to steady state with minimum step size.
Nevertheless, usually there are large peaks in the deadbeat response.
By integrating QFT specifications into the deadbeat algorithm, the
large peaks can be tolerated. On the other hand, merging QFT with
an adaptive element produces a robust controller with wider
coverage of uncertainty. By combining the QFT-based deadbeat
algorithm and an adaptive element, a superior controller called the
self-tuning QFT-based deadbeat controller can be achieved. An output
response that is fast, robust and adaptive is expected. Using a grain
dryer plant model as a pilot case-study, the performance of the
proposed method has been evaluated and analyzed. Grain drying
process is very complex with highly nonlinear behaviour, long delay,
affected by environmental changes and affected by disturbances.
Performance comparisons have been performed between the
proposed self-tuning QFT-based deadbeat, standard QFT and
standard dead-beat controllers. The efficiency of the self-tuning QFTbased
dead-beat controller has been proven from the tests results in
terms of controller’s parameters are updated online, less percentage
of overshoot and settling time especially when there are variations in
the plant.
Abstract: Currently, web usage generates huge amounts of data from
many users. In general, a proxy server is a system that supports
users' web access and whose performance can be managed using hit
rates. This research tries to improve hit rates in a proxy system
by applying a data mining technique. The data sets are collected
from proxy servers in the university and relationships are
investigated based on several features. The resulting model is used
to predict future website accesses. The association rule technique
is applied to obtain the relations among Date, Time, Main Group
web, Sub Group web, and Domain name for the created model. The
results show that this technique can predict web content for the
next day; moreover, the prediction of future website accesses
increased from 38.15% to 85.57%.
This model can predict web page accesses, which tends to increase
the efficiency of proxy servers as a result. In addition, the
performance of Internet access will be improved, helping to reduce
network traffic.
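A minimal sketch of the association-rule step, assuming transactions built from (Date, Time, group, domain) items and using plain support/confidence counting rather than a full Apriori implementation; the log entries and thresholds are invented:

```python
from itertools import combinations
from collections import Counter

# Hypothetical proxy-log transactions: {Date, Time slot, Main group, Domain}.
logs = [
    {"Mon", "evening", "news", "site-a.example"},
    {"Mon", "evening", "news", "site-a.example"},
    {"Tue", "evening", "news", "site-a.example"},
    {"Mon", "morning", "mail", "site-b.example"},
]

def rules(transactions, min_support=0.5, min_conf=0.8):
    """Mine pairwise association rules A -> B with support and confidence."""
    n = len(transactions)
    items = Counter(i for t in transactions for i in t)
    pairs = Counter(p for t in transactions for p in combinations(sorted(t), 2))
    out = []
    for (a, b), cnt in pairs.items():
        if cnt / n >= min_support:               # frequent pair
            for x, y in ((a, b), (b, a)):
                conf = cnt / items[x]            # P(y | x)
                if conf >= min_conf:
                    out.append((x, y, cnt / n, conf))
    return out

for lhs, rhs, sup, conf in rules(logs):
    print(f"{lhs} -> {rhs}  support={sup:.2f} confidence={conf:.2f}")
```

High-confidence rules such as "evening → news site" are then used to pre-fetch the corresponding content into the proxy cache before the predicted accesses occur.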
Abstract: Nature conducts its actions in a very private manner.
Classical science has made great efforts to reveal these actions,
but it can experiment only with things that can be seen with the
eyes. Beyond the scope of classical science, quantum science works
very well. It is based on postulates such as the qubit,
superposition of two states, entanglement, measurement and
evolution of states, which are briefly described in the present
paper.
One application of quantum computing, namely the implementation
of a novel quantum evolutionary algorithm (QEA) to automate the
timetabling problem of Dayalbagh Educational Institute (Deemed
University), is also presented in this paper. Making a good
timetable is a scheduling problem. It is an NP-hard,
multi-constrained, complex, combinatorial optimization problem
whose solution cannot be obtained in polynomial time. The QEA uses
genetic operators on the Q-bits as well as a quantum-gate update
operator, which is introduced as a variation operator to converge
toward better solutions.
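A rough sketch of the Q-bit machinery a QEA relies on: each variable is a Q-bit whose angle θ encodes P(bit = 1) = sin²θ, observation collapses it to a binary string, and a rotation gate nudges the angles towards the best solution found. The rotation step size and the fixed "best" string below are illustrative assumptions, not the paper's operator design:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                     # one Q-bit per binary decision variable
theta = np.full(n, np.pi / 4)             # start with P(0) = P(1) = 0.5

def observe(theta):
    """Collapse each Q-bit to a classical bit: P(bit = 1) = sin(theta)^2."""
    return (rng.random(n) < np.sin(theta) ** 2).astype(int)

def rotate(theta, x, best, delta=0.05 * np.pi):
    """Rotation-gate update: nudge angles towards the best-known solution."""
    return theta + delta * np.sign(best - x)

best = np.ones(n, dtype=int)              # pretend the all-ones string is best
for _ in range(40):
    theta = np.clip(rotate(theta, observe(theta), best), 0.0, np.pi / 2)
print(np.round(np.sin(theta) ** 2, 2))    # probabilities pushed towards 1
```

In a full QEA, "best" is re-evaluated each generation against the timetable's constraints, and genetic operators act on the observed strings alongside this rotation-gate variation operator.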
Abstract: By employing BS (Base Station) cooperation we can
substantially increase the spectral efficiency and capacity of
cellular systems. The signals received at each BS are sent to a
central unit that performs the separation of the different MTs
(Mobile Terminals) sharing the same physical channel. However,
those signals must be sampled and quantized accurately while
keeping the backhaul communication requirements low.
In this paper we consider the optimization of the quantizers for BS
cooperation systems. Four different quantizer types are analyzed and
optimized to achieve better SQNR (Signal-to-Quantization Noise
Ratio) and BER (Bit Error Rate) performance.
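The SQNR trade-off being optimized can be illustrated with a plain uniform quantizer applied to Gaussian samples: each extra bit buys roughly 6 dB of SQNR until overload distortion dominates. The 4σ loading factor is an assumption for this sketch, not the paper's design:

```python
import numpy as np

def uniform_quantize(x, bits, vmax):
    """Mid-rise uniform quantizer over [-vmax, vmax] with 2^bits levels."""
    step = 2 * vmax / (2 ** bits)
    return step * (np.floor(np.clip(x, -vmax, vmax - 1e-12) / step) + 0.5)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=100_000)          # stand-in received samples
for bits in (4, 6, 8):
    q = uniform_quantize(x, bits, vmax=4.0)     # 4-sigma loading factor
    sqnr = 10 * np.log10(np.mean(x**2) / np.mean((x - q)**2))
    print(bits, "bits ->", round(sqnr, 1), "dB")
```

Non-uniform quantizers reshape the level spacing to the signal's distribution, improving SQNR at the same bit budget, which is the kind of comparison the four analyzed quantizer types enable.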