Abstract: Displacement measurement was conducted by digital image correlation on compact normal and shear specimens made of a homogeneous acrylic material subjected to mixed-mode loading. The intelligent hybrid method proposed by Nishioka et al. was applied to the stress-strain analysis near the crack tip. The accuracy of the stress-intensity factor at the free surface was discussed from the viewpoint of both the experiment and a 3-D finite element analysis. The surface images before and after deformation were taken by a CMOS camera, and we developed a system that enables real-time stress analysis based on digital image correlation and inverse problem analysis. The greatest portion of the processing time of this system was spent on displacement analysis, so we worked on speeding up this stage. In the case of a cracked body, it is also possible to evaluate fracture mechanics parameters such as the J integral, the strain energy release rate, and the mixed-mode stress-intensity factors. The 9-point elliptic paraboloid approximation could not resolve submicron-order displacements with high accuracy. The accuracy of the displacement analysis was improved considerably by introducing the Newton-Raphson method, which accounts for the deformation of the subset. The stress-intensity factor was evaluated with an error of less than 1%.
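As an illustration of the subset-matching step underlying digital image correlation, the sketch below performs an integer-pixel search that maximizes the zero-normalized cross-correlation (ZNCC) between a reference subset and candidate subsets; the subset size, search range, and image handling are assumptions, and the Newton-Raphson refinement with subset deformation used in the paper is not reproduced here.

```python
# Minimal sketch of integer-pixel DIC subset matching via ZNCC (not the
# authors' implementation). Assumes the subset and search window stay
# inside both images.
import numpy as np

def zncc(f, g):
    """Zero-normalized cross-correlation between two equally sized subsets."""
    f = f - f.mean()
    g = g - g.mean()
    return float((f * g).sum() / (np.linalg.norm(f) * np.linalg.norm(g) + 1e-12))

def integer_pixel_search(ref_img, def_img, center, half=10, search=5):
    """Find the integer displacement of a subset centred at `center` (row, col)."""
    r, c = center
    ref = ref_img[r - half:r + half + 1, c - half:c + half + 1]
    best, best_uv = -2.0, (0, 0)
    for du in range(-search, search + 1):
        for dv in range(-search, search + 1):
            cand = def_img[r + du - half:r + du + half + 1,
                           c + dv - half:c + dv + half + 1]
            score = zncc(ref, cand)
            if score > best:
                best, best_uv = score, (du, dv)
    return best_uv, best
```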
Abstract: Global approximation using a metamodel of a complex mathematical function or computer model over a large variable domain is often needed in sensitivity analysis, computer simulation, optimal control, and global design optimization of complex, multiphysics systems. To overcome the limitations of existing response surface (RS), surrogate, or metamodeling methods for complex models over a large variable domain, a new adaptive and regressive RS modeling method using quadratic functions and local-area model improvement schemes is introduced. The method applies an iterative, Latin hypercube sampling based RS update process, divides the entire domain of the design variables into multiple cells, identifies rough cells with large modeling error, and further divides these cells along the roughest dimension. A small number of additional sampling points from the original, expensive model are added over the small, isolated rough cells to improve the RS model locally until the model accuracy criteria are satisfied. The method then combines the local RS cells to regenerate a global RS model with satisfactory accuracy. An effective RS cell sorting algorithm is also introduced to improve the efficiency of model evaluation. Benchmark tests are presented, and the use of the new metamodeling method to replace a complex hybrid electric vehicle powertrain performance model in vehicle design optimization and optimal control is discussed.
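As a minimal sketch of two of the building blocks named above, the code below draws a Latin hypercube sample over a variable domain and fits a full quadratic response surface by least squares; the adaptive cell splitting and local model improvement of the proposed method are not reproduced, and all function names and defaults are illustrative assumptions.

```python
# Latin hypercube sampling plus a least-squares quadratic response surface fit.
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """bounds: array of shape (n_dims, 2) with [lower, upper] per variable."""
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)
    d = bounds.shape[0]
    # One stratified, independently permuted coordinate per dimension.
    u = (rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
         + rng.random((n_samples, d))) / n_samples
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

def quadratic_basis(X):
    """[1, x_i, x_i^2, x_i*x_j] terms of a full quadratic response surface."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)] + [X[:, i] ** 2 for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i + 1, d)]
    return np.column_stack(cols)

def fit_quadratic_rs(X, y):
    coeffs, *_ = np.linalg.lstsq(quadratic_basis(X), y, rcond=None)
    return coeffs  # predict with quadratic_basis(X_new) @ coeffs
```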
Abstract: Artifacts are among the most important factors degrading CT image quality and play an important role in diagnostic accuracy. In this paper, some artifacts that typically appear in spiral CT are introduced. The different factors that cause these artifacts, such as the patient, the equipment, and the interpolation algorithm, are discussed, and new developments and image processing algorithms to prevent or reduce them are presented.
Abstract: Among the various cooling problems in industrial applications such as electronic devices, heat exchangers, and gas turbines, gas turbine blade cooling is the most challenging. One of the most common practices is the use of ribbed walls, which excite the boundary layer and thereby enhance cooling. Vortex formation between the rib and the channel wall results in a complicated flow regime. On the other hand, selecting the most efficient modeling method for reproducing experimental results is an important issue. In this paper, four common turbulence models, the standard k-ε, the realizable k-ε with enhanced wall boundary layer treatment, the k-ω, and the Reynolds stress model (RSM), are applied to a square ribbed channel to investigate the separation and thermal behavior of the flow in the channel. Finally, the results of the different models are compared with experimental data available in the literature to assess the accuracy of the numerical method.
Abstract: The spectral action balance equation is used to simulate short-crested, wind-generated waves in shallow water areas such as coastal regions and inland waters. The equation involves two spatial dimensions, wave direction, and wave frequency, and can be solved by the finite difference method. When this convection-dominated equation is discretized using central differences, stability problems occur if the grid spacing is chosen too coarse. In this paper, we introduce splitting upwind schemes to avoid these stability problems and prove that they are consistent with the upwind scheme and have the same accuracy. The splitting upwind scheme is used to split the spectral action balance equation into four one-dimensional problems, each of which yields an independent tridiagonal linear system. These smaller systems can be solved by direct or iterative methods simultaneously, which is very fast on a multi-processor computer.
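Each of the one-dimensional sub-problems produced by the splitting leads to a tridiagonal linear system, which can be solved directly with the Thomas algorithm; the sketch below is a generic implementation with assumed coefficient arrays, not the paper's code.

```python
# Thomas algorithm for a tridiagonal system A x = rhs.
import numpy as np

def solve_tridiagonal(lower, diag, upper, rhs):
    """A has sub-diagonal `lower`, diagonal `diag`, super-diagonal `upper`
    (lower[0] and upper[-1] are unused)."""
    n = len(diag)
    b = np.array(diag, dtype=float)
    c = np.array(upper, dtype=float)
    d = np.array(rhs, dtype=float)
    # Forward elimination.
    for i in range(1, n):
        m = lower[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    # Back substitution.
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x
```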
Abstract: Many matching algorithms with different characteristics have been introduced in recent years. For real-time systems these algorithms are usually based on minutiae features. In this paper we introduce a novel approach to feature extraction in which the extracted features are independent of the shift and rotation of the fingerprint, while the matching operation is performed more easily and with higher speed and accuracy. In this approach, a reference point and a reference orientation are first determined for each fingerprint, and the features are then converted into polar coordinates based on this information. Owing to the high speed and accuracy of this approach, the small volume of extracted features, and the simplicity of the matching operation, it is well suited to real-time applications.
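A minimal sketch of the conversion step described above, with hypothetical minutiae tuples (x, y, theta): each minutia is re-expressed in polar coordinates relative to the reference point and reference orientation, which makes the resulting features invariant to translation and rotation.

```python
# Convert minutiae to translation- and rotation-invariant polar features.
import math

def to_polar_features(minutiae, ref_point, ref_orientation):
    rx, ry = ref_point
    features = []
    for x, y, theta in minutiae:
        dx, dy = x - rx, y - ry
        radius = math.hypot(dx, dy)
        # Angular position and ridge direction, both measured from the
        # reference orientation so a rotated print yields the same values.
        angle = (math.atan2(dy, dx) - ref_orientation) % (2 * math.pi)
        direction = (theta - ref_orientation) % (2 * math.pi)
        features.append((radius, angle, direction))
    # Sorting by radius gives a canonical order that eases matching.
    return sorted(features)
```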
Abstract: The prediction of transmembrane helical segments (TMHs) in membrane proteins is an important field in bioinformatics research. In this paper, a method based on the discrete wavelet transform (DWT) has been developed to predict the number and location of TMHs in membrane proteins. The protein with PDB code 1F88 was chosen as an example to illustrate the prediction of the number and location of TMHs by this method. A group of test data sets containing a total of 19 protein sequences was used to assess the performance of the method. Compared with the prediction results of DAS, PRED-TMR2, SOSUI, HMMTOP2.0 and TMHMM2.0, the obtained results indicate that the presented method has higher prediction accuracy.
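A minimal sketch of a DWT-based pipeline of the kind described above, with assumed details (Kyte-Doolittle hydropathy scale, db4 wavelet, simple thresholding) that may differ from the authors' choices: the hydropathy profile is smoothed by zeroing the detail coefficients, and long hydrophobic stretches in the reconstruction are reported as candidate TMHs.

```python
# DWT smoothing of a hydropathy profile to locate candidate TMHs.
import numpy as np
import pywt

KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5, 'E': -3.5,
      'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8,
      'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2}

def smoothed_hydropathy(sequence, wavelet='db4', level=3):
    profile = np.array([KD[aa] for aa in sequence.upper()])
    coeffs = pywt.wavedec(profile, wavelet, level=level)
    # Keep only the approximation coefficients; zero the detail levels.
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(profile)]

def candidate_tmh_regions(sequence, threshold=1.0, min_len=15):
    smooth = smoothed_hydropathy(sequence)
    mask = smooth > threshold
    regions, start = [], None
    for i, hot in enumerate(list(mask) + [False]):
        if hot and start is None:
            start = i
        elif not hot and start is not None:
            if i - start >= min_len:
                regions.append((start, i - 1))
            start = None
    return regions
```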
Abstract: Word sense disambiguation is one of the most important open problems in natural language processing applications such as information retrieval and machine translation. Several strategies can be employed to resolve word ambiguity with a reasonable degree of accuracy: knowledge-based, corpus-based, and hybrid approaches. This paper focuses on the corpus-based strategy, which employs an unsupervised learning method for disambiguation. We report our investigation of Latent Semantic Indexing (LSI), an unsupervised information retrieval technique, applied to the task of Thai noun and verb word sense disambiguation. LSI has been shown to be efficient and effective for information retrieval. For the purposes of this research, we report experiments on two Thai polysemous words, namely /hua4/ and /kep1/, which are used as representatives of Thai nouns and verbs, respectively. The results of these experiments demonstrate the effectiveness, and indicate the potential, of applying vector-based distributional information measures to semantic disambiguation.
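A minimal sketch of the unsupervised LSI step, under assumed preprocessing: contexts of an ambiguous word are placed in a term-context matrix, reduced with a truncated SVD, and clustered so that each cluster corresponds to one sense; Thai tokenization and the actual corpora are outside this sketch.

```python
# Truncated-SVD (LSI) representation of word contexts, clustered into senses.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

def lsi_sense_clusters(contexts, n_senses=2, k=50):
    """contexts: list of strings, each the (pre-tokenized) context of one occurrence."""
    X = CountVectorizer().fit_transform(contexts).toarray().T  # terms x contexts
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = min(k, len(s))
    context_vectors = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dim vector per context
    return KMeans(n_clusters=n_senses, n_init=10).fit_predict(context_vectors)
```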
Abstract: In this paper, we propose a disease diagnosis hardware architecture based on the hypernetworks technique. It can be used to diagnose three different diseases (SPECT heart disease, leukemia, and prostate cancer). Generally, disparate diseases each require their own dedicated diagnosis hardware model. Exploiting the similarities among the three disease diagnosis processors, we design a single diagnosis processor that can diagnose all three diseases. The proposed architecture, which combines the three processors into one, reduces hardware size without any decrease in accuracy.
Abstract: River flow forecasting is crucial for improving management policies aimed at the proper use of water resources as well as for combining prevention and defense actions against environmental degradation. The difficulties encountered during field activities encourage the development and implementation of operational computation and measurement methods that reduce the time required for data acquisition and processing while maintaining a good level of accuracy. Therefore, the aim of the present work is to test a new entropy-based expeditious methodology for the evaluation of rating curves on three gauged sections with different geometric and morphological characteristics. The methodology requires the choice of only three verticals along the measurement section and the sampling of only the maximum velocity. The results show that in most conditions the rating curves thus drawn can replace those built with classical methodologies, thereby simplifying the procedures of data monitoring and calculation.
Abstract: This paper presents a new method for estimating the mean curve of impulse voltage waveforms recorded during impulse tests. In practice, these waveforms are distorted by noise, oscillations and overshoot. The problem is formulated as an estimation problem, and the signal parameters are estimated using a fast and accurate technique based on a discrete dynamic filtering (DDF) algorithm. The main advantage of the proposed technique is its ability to produce the estimates in a very short time and with a very high degree of accuracy. The algorithm uses sets of digital samples of the recorded impulse waveform. The proposed technique has been tested using simulated data of practical waveforms, and the effects of the number of samples and the data window size are studied. Results are reported and discussed.
Abstract: This paper presents a controller design technique for a synchronous reluctance motor to improve its dynamic performance with fast response and high accuracy. Sliding mode control is an attractive and suitable method for this purpose, since it is simple to design and insensitive to parameter variations and external disturbances. When implemented, it yields a fast dynamic response without overshoot and zero steady-state error. A current loop control with decentralized sliding mode is presented in this paper. The mathematical models of the synchronous machine, the inverter and the controller are developed, and the stability of the sliding mode controller is analyzed. Simulation of the synchronous reluctance motor and the controller with a PWM inverter has been carried out using the SIMULINK package of MATLAB. Simulation results are presented to show the effectiveness of the approach.
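As an illustration only (the paper's model is built in SIMULINK), the sketch below simulates a relay-type sliding mode controller for a single current loop of an assumed RL axis: the sliding surface is the current error and the applied voltage switches with its sign.

```python
# Relay-type sliding mode current control of an assumed single RL axis.
import numpy as np

R, L = 1.0, 5e-3          # assumed winding resistance [ohm] and inductance [H]
V_DC = 200.0              # assumed available inverter voltage [V]
dt, t_end = 1e-5, 0.02    # integration step and simulated time [s]

def simulate_smc_current_loop(i_ref=10.0):
    i, history = 0.0, []
    for _ in np.arange(0.0, t_end, dt):
        s = i_ref - i                  # sliding surface: current tracking error
        u = V_DC * np.sign(s)          # switching (relay) control law
        i += (u - R * i) / L * dt      # one-axis RL current dynamics (Euler step)
        history.append(i)
    return np.array(history)
```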
Abstract: In the last few years, several steps were taken to improve the quality of corporate governance of Romanian listed companies. Higher standards of corporate governance are documented in the literature to lead to a better information environment and, consequently, to higher analysts' forecast accuracy. Accordingly, the purpose of this paper is to investigate the extent to which corporate governance policies affect analysts' forecasts for companies listed on the Bucharest Stock Exchange. The results show that there is indeed a negative correlation between a corporate governance index, used as a proxy for the quality of corporate governance practices, and analysts' forecast errors.
Abstract: Wireless sensor networks (WSNs) can be used to monitor physical phenomena in areas where human access is nearly impossible. The limited power supply is therefore the major constraint of WSNs, owing to the use of non-rechargeable batteries in sensor nodes, and much research is under way to reduce the energy consumption of sensor nodes. The energy map can be used with clustering, data dissemination and routing techniques to reduce the power consumption of WSNs; it can also be used to identify which part of the network is likely to fail in the near future. In this paper, the energy map is constructed using a prediction-based approach, with the adaptive-alpha GM(1,1) model as the prediction model. GM(1,1) is used worldwide in many applications for predicting future values of a time series from a few past values, owing to its high computational efficiency and accuracy.
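A minimal sketch of the plain GM(1,1) grey model with background coefficient alpha (alpha = 0.5 gives the classical model; the adaptive-alpha scheme of the paper is not reproduced): a few recent residual-energy readings are used to forecast the next ones.

```python
# GM(1,1) grey prediction from a short series of past readings.
import numpy as np

def gm11_predict(x0, steps=1, alpha=0.5):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated series
    z1 = alpha * x1[1:] + (1 - alpha) * x1[:-1]         # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # developing/grey coefficients
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time response function
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])         # restore by differencing
    x0_hat[0] = x0[0]
    return x0_hat[n:]                                   # forecast values
```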
Abstract: Tumor classification is a key area of research in the field of bioinformatics. Microarray technology is commonly used in the study of disease diagnosis using gene expression levels. The main drawback of gene expression data is that it contains thousands of genes and very few samples. Feature selection methods are used to select the informative genes from the microarray and considerably improve the classification accuracy. In the proposed method, a Genetic Algorithm (GA) is used for effective feature selection. Informative genes are identified based on T-statistic, Signal-to-Noise Ratio (SNR) and F-test values, and the initial candidate solutions of the GA are obtained from the top-m informative genes. The classification accuracy of the k-Nearest Neighbor (kNN) method is used as the fitness function for the GA. In this work, kNN and Support Vector Machine (SVM) are used as the classifiers. The experimental results show that the proposed approach is suitable for effective feature selection: with the selected genes, the GA-kNN method achieves 100% accuracy on 4 of the 10 datasets and the GA-SVM method on 5. The GA combined with the kNN and SVM classifiers is thus demonstrated to be an accurate method for microarray-based tumor classification.
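A minimal sketch of the fitness evaluation described above, under assumed data shapes: a candidate solution is a subset of gene indices, and its fitness is the cross-validated accuracy of a kNN classifier restricted to those genes; the GA operators themselves (selection, crossover, mutation over the top-m informative genes) are not shown.

```python
# kNN cross-validation accuracy as a GA fitness function for a gene subset.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def knn_fitness(gene_indices, X, y, k=3, folds=5):
    """X: samples x genes expression matrix, y: class labels."""
    clf = KNeighborsClassifier(n_neighbors=k)
    scores = cross_val_score(clf, X[:, gene_indices], y, cv=folds)
    return scores.mean()
```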
Abstract: Proteomics is one of the largest areas of research in bioinformatics and medical science. An ambitious goal of proteomics is to elucidate the structure, interactions and functions of all proteins within cells and organisms. Predicting protein-protein interactions (PPIs) is one of the crucial and decisive problems in current research. Genomic data offer a great opportunity, and at the same time many challenges, for the identification of these interactions, and many methods have already been proposed. In the case of in-silico identification, most methods require both positive and negative examples of protein interaction, and the quality of these examples is crucial for the final prediction accuracy. Positive examples are relatively easy to obtain from well-known databases, but the generation of negative examples is not a trivial task. Current PPI identification methods generate negative examples based on assumptions that are likely to affect their prediction accuracy; hence, if more reliable negative examples are used, PPI prediction methods may achieve even higher accuracy. Focusing on this issue, a graph-based negative example generation method is proposed that is simple and more accurate than existing approaches. An interaction graph of the protein sequences is created, and the basic assumption is that the longer the shortest path between two protein sequences in the interaction graph, the lower the possibility of their interaction. A well-established PPI detection algorithm is employed with our negative examples, and in most cases the accuracy increases by more than 10% compared with the negative pair selection method of the original work.
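A minimal sketch of the negative-example idea described above, with an assumed adjacency-list representation of the interaction graph: protein pairs whose shortest-path distance exceeds a chosen cut-off (or that are disconnected) are taken as more reliable non-interacting pairs.

```python
# Select negative (non-interacting) pairs by shortest-path distance in the
# interaction graph.
from collections import deque
from itertools import combinations

def bfs_distances(graph, source):
    """graph: dict mapping each protein to a set of interacting proteins."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def negative_pairs(graph, min_distance=4):
    pairs = []
    for p, q in combinations(sorted(graph), 2):
        d = bfs_distances(graph, p).get(q)      # None if disconnected
        if d is None or d >= min_distance:
            pairs.append((p, q))
    return pairs
```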
Abstract: The equivalence class subset algorithm is a powerful tool for solving a wide variety of constraint satisfaction problems and is based on the use of a decision function that has a very high but not perfect accuracy. Perfect accuracy is not required in the decision function, as even a suboptimal solution contains valuable information that can be used to help find an optimal solution. In the hardest problems, the decision function can break down, leading to a suboptimal solution with more equivalence classes than necessary, which can be viewed as a mixture of good and bad decisions. By choosing a subset of the decisions made in reaching a suboptimal solution, an iterative technique can lead to an optimal solution through a series of steadily improved suboptimal solutions. The goal is to reach an optimal solution as quickly as possible. Various techniques for choosing the decision subset are evaluated.
Abstract: This work deals with aspects of support vector machine learning for large-scale data mining tasks. Based on a decomposition algorithm for support vector machine training that can be run in serial as well as shared-memory parallel mode, we introduce a transformation of the training data that allows the use of an expensive generalized kernel without additional cost. We present experiments for the Gaussian kernel, but other kernel functions can be used as well. In order to further speed up the decomposition algorithm, we analyze the critical problem of working set selection for large training data sets. In addition, we analyze the influence of the working set size on the scalability of the parallel decomposition scheme. Our tests and conclusions led to several modifications of the algorithm and to an improvement in overall support vector machine learning performance. Our method allows extensive parameter search methods to be used to optimize classification accuracy.
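For reference, the sketch below computes the Gaussian (RBF) kernel matrix used in the experiments in a vectorized way; it illustrates only the kernel, not the decomposition algorithm, working set selection, or data transformation of the paper.

```python
# Vectorized Gaussian (RBF) kernel matrix: K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2)).
import numpy as np

def gaussian_kernel_matrix(X, Z, sigma=1.0):
    """X: (n, d) and Z: (m, d) data matrices; returns the (n, m) kernel matrix."""
    sq_dists = (np.sum(X ** 2, axis=1)[:, None]
                + np.sum(Z ** 2, axis=1)[None, :]
                - 2.0 * X @ Z.T)
    return np.exp(-np.maximum(sq_dists, 0.0) / (2.0 * sigma ** 2))
```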
Abstract: In this paper we propose a new method for simultaneously generating multiple quantiles corresponding to given probability levels from data streams and massive data sets. This method provides a basis for the development of single-pass, low-storage quantile estimation algorithms, which differ in complexity, storage requirements and accuracy. We demonstrate that such algorithms may perform well even for heavy-tailed data.
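As an illustration of a single-pass, low-storage estimator of the general kind discussed above (not the authors' algorithm), the sketch below tracks several quantiles simultaneously with a stochastic-approximation update while reading the stream once.

```python
# Single-pass stochastic-approximation estimation of multiple quantiles.
import numpy as np

def streaming_quantiles(stream, probs, step=0.05):
    """probs: probability levels, e.g. (0.25, 0.5, 0.9). Returns the estimates."""
    probs = np.asarray(probs, dtype=float)
    q = None
    for x in stream:
        if q is None:
            q = np.full(len(probs), float(x))   # initialize at the first observation
            continue
        # Move each estimate up with weight p and down with weight 1 - p.
        q += step * (probs - (x <= q))
    return q
```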
Abstract: This paper presents an alternative approach that uses an artificial neural network to simulate flood level dynamics in a river basin. The algorithm was developed in a decision support system environment in order to enable users to process the data. The decision support system is found to be useful due to its interactive nature, flexible approach and evolving graphical features, and it can be adopted for any similar situation to predict the flood level. The main data processing includes gauging station selection, input generation, lead-time selection/generation, and length of prediction. The program enables users to process the flood level data, to train/test the model using various inputs and to visualize the results; its code consists of a set of files that can be modified to match other purposes. The results indicate that the decision support system applied to the flood level has reached encouraging results for the river basin under examination. The comparison of the model predictions with the observed data was satisfactory, and the model is able to forecast the flood level up to 5 hours in advance with reasonable prediction accuracy. Finally, the program may also serve as a tool for real-time flood monitoring and process control.
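A minimal sketch of the core modelling step, with hypothetical lag and lead-time choices: a feed-forward neural network is trained on lagged gauge readings to forecast the flood level a chosen number of hours ahead; the decision support system, station selection, and visualization layers of the paper are not shown.

```python
# Feed-forward network forecasting the flood level a fixed lead time ahead.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged_dataset(levels, n_lags=6, lead_time=5):
    """levels: 1-D array of hourly water levels; forecast `lead_time` hours ahead."""
    X, y = [], []
    for t in range(n_lags, len(levels) - lead_time):
        X.append(levels[t - n_lags:t])
        y.append(levels[t + lead_time])
    return np.array(X), np.array(y)

def train_flood_model(levels):
    X, y = make_lagged_dataset(np.asarray(levels, dtype=float))
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    model.fit(X, y)
    return model
```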