Abstract: This study investigates the performance of radial basis function networks (RBFN) in forecasting the monthly CO2 emissions of an electric power utility. We also propose a method for input variable selection, based on identifying the general relationships between groups of input candidates and the output. The effect that each input has on the forecasting error is examined by removing all inputs except the variable under investigation from its group, calculating the network's parameters, and performing the forecast. Finally, the new forecasting error is compared with that of the reference model. Eight input variables were identified as the most relevant, significantly fewer than the 30 input variables of our reference model. The simulation results demonstrate that the model with the 8 inputs selected using the proposed method forecasts as accurately as the reference model, while also being the most parsimonious.
Abstract: This paper summarizes the results of experiments aimed at finding effective features for the disambiguation of Turkish verbs. Word sense disambiguation is an active area of investigation in which verbs play the dominant role. Verbs generally have more senses, on average, than other word types, and identifying effective features for verbs may lead to improvements for other word types as well. In this paper we consider only the syntactic features that can be obtained from the corpus, tested using several well-known machine learning algorithms.
Abstract: This paper deals with the project selection problem, one of the earliest problems to arise in the field of operations research, following production concepts from the basic product-mix problem. Later, the introduction of managerial considerations into the project selection problem brought qualitative factors and criteria into play alongside quantitative ones. To handle both kinds of criteria, an analytic network process is developed in this paper, enhanced with fuzzy set theory to tackle the vagueness of experts' comments when evaluating the alternatives. Additionally, a modified least-squares method, formulated as a non-linear programming model, is added to the developed group decision-making structure in order to elicit the final weights from the comparison matrices. Finally, a case study is presented to validate the developed structure. Moreover, a sensitivity analysis is performed to assess the model's response to changing conditions.
Abstract: The aim of this paper is to present current and future
procedures in castings procurement. Differences in procurement are
highlighted. The supplier selection criteria used in practice are
compared with literature findings. Different trends related to supply
chains are presented, and how they are reflected in actual castings
procurement is described. To fulfil this aim, interviews were
conducted in nine companies that use castings. It was found that the
largest casting users have the most subcontractor foundries, and they
more typically have multiple suppliers for the same parts. Currently
only two of the nine companies purchase castings outside Europe,
but the others are also moving in the same direction, the main
reason being the need to lower purchasing costs. Another trend is that all
companies want to buy cast components or sub-assemblies from
foundries instead of raw castings. Price was found to be the main
supplier selection criterion, and all companies use competitive
bidding in supplier selection.
Abstract: Tumor classification is a key area of research in the
field of bioinformatics. Microarray technology is commonly used in
the study of disease diagnosis using gene expression levels. The
main drawback of gene expression data is that it contains thousands
of genes but very few samples. Feature selection methods are used
to select the informative genes from the microarray. These methods
considerably improve the classification accuracy. In the proposed
method, Genetic Algorithm (GA) is used for effective feature
selection. Informative genes are identified based on the T-Statistics,
Signal-to-Noise Ratio (SNR) and F-Test values. The initial candidate
solutions of GA are obtained from top-m informative genes. The
classification accuracy of k-Nearest Neighbor (kNN) method is used
as the fitness function for GA. In this work, kNN and Support Vector
Machine (SVM) are used as the classifiers. The experimental results
show that the proposed work is suitable for effective feature
selection. With the help of the selected genes, GA-kNN method
achieves 100% accuracy in 4 datasets, and the GA-SVM method
does so in 5 of the 10 datasets. GA with kNN and SVM is thus
demonstrated to be an accurate approach to microarray-based tumor
classification.
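The GA-with-kNN-fitness loop described in the abstract above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy 40×20 "microarray", the GA parameters, and the leave-one-out 1-NN fitness are all assumptions.

```python
# Hypothetical sketch: GA feature selection with kNN accuracy as fitness.
import numpy as np

rng = np.random.default_rng(0)

# Toy "microarray": 40 samples x 20 candidate genes, 2 classes.
# Genes 0-2 are made informative; the rest are noise.
y = np.array([0] * 20 + [1] * 20)
X = rng.normal(size=(40, 20))
X[y == 1, :3] += 2.0

def loo_1nn_accuracy(X, y, mask):
    """Leave-one-out 1-NN accuracy on the genes selected by `mask`."""
    Xs = X[:, mask.astype(bool)]
    if Xs.shape[1] == 0:
        return 0.0
    d = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # exclude the sample itself
    return np.mean(y[d.argmin(axis=1)] == y)

def ga_select(X, y, pop=30, gens=40, pmut=0.05):
    n = X.shape[1]
    P = rng.integers(0, 2, size=(pop, n))      # random binary chromosomes
    for _ in range(gens):
        fit = np.array([loo_1nn_accuracy(X, y, c) for c in P])
        # Binary tournament selection of parents
        i, j = rng.integers(0, pop, (2, pop))
        parents = P[np.where(fit[i] >= fit[j], i, j)]
        # One-point crossover followed by bit-flip mutation
        cut = rng.integers(1, n, pop)
        child = np.array([np.r_[parents[k, :cut[k]],
                                parents[(k + 1) % pop, cut[k]:]]
                          for k in range(pop)])
        child ^= rng.random(child.shape) < pmut
        P = child
    fit = np.array([loo_1nn_accuracy(X, y, c) for c in P])
    return P[fit.argmax()], fit.max()

mask, acc = ga_select(X, y)
```

The paper seeds the initial population from top-m informative genes (T-statistics, SNR, F-test); the sketch above starts from random chromosomes instead, for brevity.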
Abstract: Quality control is a crucial step in the ISO 9001
Quality Management System standard for companies. When
measuring the quality level of both raw materials and semi-finished
or finished products, the calibration of the measuring device is an
essential requirement. Calibration suppliers operate in the service
sector, and calibration supplier selection is therefore becoming a
worthy topic for improving service quality.
This study presents the results of a questionnaire about the
selection criteria of a calibration supplier. The questionnaire was
applied to 103 companies and the results are discussed in this paper.
The analysis was made with the MINITAB 14.0 statistical program.
"Competence of documentation" and "technical capability" are
identified as prerequisites imposed by the ISO/IEC 17025:2005
standard. "Warranties and complaint policy", "communication",
"service features", "quality" and "performance history" are also
identified as very important criteria for calibration supplier selection.
Abstract: In this paper, a new adaptive Fourier decomposition
(AFD) based time-frequency speech analysis approach is proposed.
Given that the fundamental frequency of speech signals often
fluctuates, the classical short-time Fourier transform (STFT) based
spectrogram analysis suffers from the difficulty of window size
selection. AFD is a newly developed signal decomposition theory
designed to deal with time-varying, non-stationary signals. Its
distinguishing characteristic is that it provides an instantaneous
frequency for each decomposed component, which makes
time-frequency analysis easier. Experiments are conducted on a
sample sentence from the
TIMIT Acoustic-Phonetic Continuous Speech Corpus. The results
show that the AFD based time-frequency distribution outperforms the
STFT based one.
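The window-size difficulty attributed to the STFT in the abstract above can be illustrated with a minimal numpy sketch. The two-tone test signal and the window lengths are assumptions for illustration, not from the paper.

```python
# Sketch of the STFT time/frequency resolution trade-off.
import numpy as np

fs = 8000
t = np.arange(0, 0.5, 1 / fs)
# Two close partials at 200 Hz and 220 Hz (assumed test signal).
x = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 220 * t)

def stft_mag(x, win, hop):
    """Magnitude spectrogram with a Hann window of length `win`."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

S_short = stft_mag(x, win=128, hop=64)    # bins are fs/128 = 62.5 Hz wide
S_long = stft_mag(x, win=2048, hop=512)   # bins are fs/2048 ~ 3.9 Hz wide
```

At 62.5 Hz per bin the 200 Hz and 220 Hz partials fall into the same frequency bin; at roughly 3.9 Hz per bin they sit about 5 bins apart, but at the cost of far fewer time frames. A fluctuating fundamental forces exactly this compromise, which is the motivation for AFD's per-component instantaneous frequency.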
Abstract: Single nucleotide polymorphisms (SNPs) hold much promise as a basis for disease-gene association. However, research is limited by the cost of genotyping the tremendous number of SNPs. It is therefore important to identify a small subset of informative SNPs, the so-called tag SNPs. This subset consists of selected SNPs of the genotypes and accurately represents the rest of the SNPs. Furthermore, an effective evaluation method is needed to evaluate the prediction accuracy of a set of tag SNPs. In this paper, a genetic algorithm (GA) is applied to the tag SNP problem, and the K-nearest neighbor (K-NN) method serves as the prediction method for tag SNP selection. The experimental data were taken from the HapMap project and consist of genotype rather than haplotype data. The proposed method consistently identified tag SNPs with considerably better prediction accuracy than methods from the literature, while the number of tag SNPs identified was smaller than in the other methods. The run time of the proposed method was also much shorter than that of the SVM/STSA method when the same accuracy was reached.
Abstract: This paper presents the design trade-offs and performance impact of
the number of pipeline phases of control-path signals in a wormhole-switched
network-on-chip (NoC). The number of pipeline phases of the control
path varies between one and two cycles. The control paths
consist of the routing request paths for output selection and the arbitration
paths for input selection. Data communications between on-chip routers are
implemented synchronously, and for quality of service, the inter-router data
transports are controlled using link-level congestion control to avoid
loss of data due to overflow. The trade-off between the area (logic
cell area) and the performance (bandwidth gain) of the two proposed NoC router
microarchitectures is presented in this paper. The performance evaluation is
made using a traffic scenario with different numbers of workloads on a
2D mesh NoC topology with a static routing algorithm. Using a 130-nm
CMOS standard-cell technology, our NoC routers can be clocked at 1 GHz,
resulting in a high speed network link and high router bandwidth capacity
of about 320 Gbit/s. Based on our experiments, the number of control-path
pipeline stages has a more significant impact on NoC performance than
on the logic area of the NoC router.
Abstract: In this paper, a pipelined version of genetic algorithm,
called PLGA, and a corresponding hardware platform are described.
The basic operations of conventional GA (CGA) are made pipelined
using an appropriate selection scheme. The selection operator, used
here, is stochastic in nature and is called SA-selection. This helps
maintain the basic generational nature of the proposed pipelined
GA (PLGA). A number of benchmark problems are used to compare
the performances of conventional roulette-wheel selection and the
SA-selection. These include unimodal and multimodal functions with
dimensionality varying from very small to very large. The SA-selection
scheme gives performance comparable to that of the classical
roulette-wheel selection scheme on all instances, when both quality of
solutions and rate of convergence are considered.
The speedups obtained by PLGA for different benchmarks
are found to be significant. It is shown that a complete hardware
pipeline can be developed using the proposed scheme, if parallel
evaluation of the fitness expression is possible. In this connection
a low-cost but very fast hardware evaluation unit is described.
Results of simulation experiments show that in a pipelined hardware
environment, PLGA is much faster than CGA. In terms of
efficiency, PLGA is also found to outperform parallel GA (PGA).
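The classical roulette-wheel selection that the abstract above uses as its baseline can be sketched in a few lines. The population and fitness values below are illustrative, not from the paper.

```python
# Sketch of fitness-proportionate (roulette-wheel) selection.
import random

def roulette_wheel_select(population, fitness, k, rng=random):
    """Draw k individuals with probability proportional to fitness."""
    total = sum(fitness)
    picks = []
    for _ in range(k):
        r = rng.uniform(0, total)        # spin the wheel
        acc = 0.0
        for ind, f in zip(population, fitness):
            acc += f                     # walk the cumulative wheel
            if acc >= r:
                picks.append(ind)
                break
    return picks

pop = ["a", "b", "c", "d"]
fit = [1.0, 2.0, 3.0, 4.0]               # "d" is four times likelier than "a"
random.seed(42)
parents = roulette_wheel_select(pop, fit, k=6)
```

Because the whole wheel must be consulted for each draw, this operator is awkward to pipeline; the paper's stochastic SA-selection is motivated by exactly this limitation.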
Abstract: Quantum mechanics simulation was applied to calculate the interaction force between two molecules at the atomic level. The simple extractive distillation system considered here is ternary, consisting of two close-boiling components (A, the lower-boiling component, and B, the higher-boiling component) and a solvent (S). The quantum mechanics simulation was used to calculate the intermolecular (interaction) forces between the close-boiling components and the solvents, namely the A-S and B-S interactions.
The requirement of a promising solvent for extractive distillation is that the solvent (S) must form a stronger intermolecular force with only one of the components (A or B) than with the other. In this study, aromatic-aromatic, aromatic-cycloparaffin, and paraffin-diolefin systems were selected to demonstrate solvent selection. The study defines a new term for screening solvents, called the relative interaction force, which is calculated from the quantum mechanics simulation. The results showed that the relative interaction force is in good agreement with literature data (relative volatilities from experiment); the reasons are discussed. Finally, this study suggests that quantum mechanics results can improve relative volatility estimation for solvent screening, reducing the time and cost involved.
Abstract: The paper presents an investigation into the effect of neural network predictive control of a UPFC on the transient stability performance of a multimachine power system. The proposed controller consists of a neural network model of the test system. This model is used to predict the future control inputs using the damped Gauss-Newton method, which employs backtracking as the line search method for step selection. The benchmark two-area, four-machine system that mimics the behavior of large power systems is taken as the test system for the study and is subjected to three-phase short circuit faults at different locations over a wide range of operating conditions. The simulation results clearly establish the robustness of the proposed controller to the fault location, an increase in the critical clearing time of the circuit breakers, and improved damping of the power oscillations compared to the conventional PI controller.
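The damped Gauss-Newton step with backtracking line search named in the abstract above can be sketched on a toy nonlinear least-squares problem. The residual function, damping value, and halving line search below are assumptions for illustration; the paper applies the method to a neural network model of the plant, not to this toy.

```python
# Sketch: damped Gauss-Newton with a backtracking line search.
import numpy as np

def residual(u):
    # Assumed toy residual: r(u) = 0 at u = [1, 2].
    return np.array([u[0] - 1.0, 10.0 * (u[1] - 2.0), u[0] * u[1] - 2.0])

def jacobian(u):
    return np.array([[1.0, 0.0],
                     [0.0, 10.0],
                     [u[1], u[0]]])

def gauss_newton(u, iters=20, damping=1e-8):
    for _ in range(iters):
        r, J = residual(u), jacobian(u)
        # Damped Gauss-Newton direction: (J^T J + mu*I) d = -J^T r
        d = np.linalg.solve(J.T @ J + damping * np.eye(len(u)), -J.T @ r)
        # Backtracking line search on the sum-of-squares cost:
        # halve the step until the cost decreases.
        step, cost = 1.0, r @ r
        while (residual(u + step * d) @ residual(u + step * d) > cost
               and step > 1e-8):
            step *= 0.5
        u = u + step * d
    return u

u_opt = gauss_newton(np.array([0.0, 0.0]))   # converges toward [1, 2]
```

The damping term keeps the normal equations solvable when the Jacobian is ill-conditioned, and the backtracking step guarantees a cost decrease at each accepted iterate.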
Abstract: In this paper we address the issue of classifying the fluorescent intensity of a sample in Indirect Immuno-Fluorescence (IIF). Since IIF is by its very nature a subjective, semi-quantitative test, we discuss a strategy for reliably labeling the image data set using the diagnoses performed by different physicians. We then discuss image pre-processing, feature extraction and selection. Finally, we propose two ANN-based classifiers that can separate intrinsically dubious samples and whose error tolerance can be flexibly set. Measured performance shows error rates of less than 1%, which makes the method a candidate for use in daily medical practice, either to pre-select the cases to be examined or to act as a second reader.
Abstract: In this paper a novel scheme for watermarking digital
audio during its compression to MPEG-1 Layer III format is
proposed. For this purpose we slightly modify some of the selected
MDCT coefficients, which are used during MPEG audio
compression procedure. Due to the possibility of modifying different
MDCT coefficients, there will be different choices for embedding the
watermark into audio data, considering robustness and transparency
factors. Our proposed method uses a genetic algorithm to select the
best coefficients to embed the watermark. This genetic selection is
done according to the parameters that are extracted from the
perceptual content of the audio to optimize the robustness and
transparency of the watermark. Moreover, the security of the
watermark is increased by the random nature of the genetic
selection. The information about the selected MDCT coefficients that
carry the watermark bits is saved in a database for future extraction
of the watermark. The proposed method is suitable for online MP3
stores wishing to trace illegal copies of musical artworks. Experimental
results show that the detection ratio of the watermarks at a bitrate
of 128 kbps remains above 90% while the inaudibility of the
watermark is preserved.
Abstract: In this paper, we present a novel statistical approach to
corpus-based speech synthesis. Classically, phonetic information is
defined and treated as the acoustic reference to be respected.
Accordingly, many studies have addressed acoustic unit classification.
This type of classification separates units according to their
symbolic characteristics. Indeed, a target cost and a concatenation
cost are classically defined for unit selection.
In corpus-based speech synthesis systems using large text
corpora, cost functions have been limited to a juxtaposition of symbolic
criteria, and the acoustic information of the units is not exploited in the
definition of the target cost.
In this paper, we take into consideration the phonetic information
of units corresponding to their acoustic information. This is realized
by defining a probabilistic linguistic bigram model used for unit
selection. The selected units are extracted from the English TIMIT
corpus.
Abstract: This paper presents a conceptual model of agreement
options for negotiation support in multi-person decisions on
optimizing high-rise building columns. The decision is complicated
because many parties are involved in choosing a single alternative
from a set of solutions, with different concerns caused by differing
preferences, experiences, and backgrounds. The building columns
serving as alternatives are referred to as agreement options, which are
determined by identifying the possible decision-maker groups,
followed by determining the optimal solution for each group. The
group in this paper is based on the preferences of three decision
makers: the designer, the programmer, and the construction manager.
Decision techniques are applied to determine the relative value of the
alternative solutions in performing the function. The Analytical
Hierarchy Process (AHP) was applied for the decision process, and a
game-theory-based agent system for coalition formation. An n-person
cooperative game is represented by the set of all players. The proposed
coalition formation model enables each agent to individually select its
allies or coalition. It further emphasizes the importance of performance
evaluation in the design process and of value-based decisions.
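The core AHP step mentioned in the abstract above, deriving priority weights from a pairwise comparison matrix via its principal eigenvector, can be sketched briefly. The 3×3 matrix below is a made-up example, not data from the paper.

```python
# Sketch: AHP priority weights from a Saaty-style comparison matrix.
import numpy as np

# Reciprocal comparison matrix: A[i, j] = relative importance of
# criterion i over criterion j (values assumed for illustration).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                 # normalized priority weights

# Consistency check (RI = 0.58 for n = 3, from Saaty's random index table).
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
CR = CI / 0.58                               # CR < 0.1 means acceptably consistent
```

Each decision maker (designer, programmer, construction manager) would supply such a matrix; the resulting weight vectors then feed the coalition-formation game.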
Abstract: Optimizing equipment selection in heavy earthwork
operations is a critical key in the success of any construction project.
The objective of this research was to develop a computer model to
assist contractors and construction managers in estimating the cost of
heavy earthwork operations. An economic operation analysis was
conducted for an equipment fleet, taking into consideration the owning
and operating costs involved in earthwork operations. The model is
being developed in a Microsoft
environment and is capable of being integrated with other estimating
and optimization models. In this study, Caterpillar® Performance
Handbook [5] was the main resource used to obtain specifications of
selected equipment. The implemented model gives the optimum
selection of an equipment fleet based not only on cost effectiveness
but also on versatility. To validate the model, a
case study of an actual dam construction project was selected to
quantify its degree of accuracy.
Abstract: Statistical selection procedures are used to select the
best simulated system from a finite set of alternatives. In this paper,
we present a procedure that can be used to select the best system
when the number of alternatives is large. The proposed procedure
consists of a combination of Ranking and Selection and Ordinal
Optimization procedures. To improve the performance of Ordinal
Optimization, the Optimal Computing Budget Allocation technique
is used to determine the best simulation lengths for all simulated
systems and to reduce the total computation time. We also examine
the effect of increasing the number of simulation samples in the
combined procedure. The numerical results clearly show the effect of
this increase on the proposed combined selection procedure.
Abstract: Techniques are examined to overcome the
performance degradation caused by channel dispersion, using
slow frequency hopping (SFH) with dynamic frequency hopping
(DFH) pattern adaptation. In DFH systems, the frequency slots are
selected by continuously monitoring the quality of all frequencies
available in the system and modifying the hopping pattern of each
individual link, replacing slots whose signal-to-interference ratio
(SIR) measurement falls below a required threshold. Simulation results
will show the improvements in BER obtained by DFH in comparison
with matched frequency hopping (MFH), random frequency hopping
(RFH) and multi-carrier code division multiple access (MC-CDMA)
in multipath slowly fading dispersive channels using a generalized
bandpass two-path transfer function model, and will show the
improvement obtained according to the threshold selection.
Abstract: In this paper, a semi-fragile watermarking scheme is proposed for color image authentication. In this scheme, the color image is first transformed from the RGB to the YST color space, which is suitable for watermarking color media. Each channel is divided into 4×4 non-overlapping blocks, and each of its 2×2 sub-blocks is selected. The embedding space is created by setting the two LSBs of the selected sub-block to zero; this space holds the authentication and recovery information. For verification, authentication and parity bits, denoted 'a' and 'p', are computed for each 2×2 sub-block. For recovery, the intensity mean of each 2×2 sub-block is computed and encoded in six to eight bits, depending on the channel selection. The size of the sub-block is important for correct localization and fast computation. For watermark distribution, a 2D torus automorphism is implemented using a private key to obtain a secure mapping of blocks. The perceptibility of the watermarked image is quite reasonable, both subjectively and objectively. Our scheme is oblivious, correctly localizes the tampering, and is able to recover the original work with probability near one.
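The 2D torus automorphism used above to scatter recovery data across blocks can be sketched as follows. The unit-determinant matrix [[1, 1], [k, k+1]] with k = 1 (the Arnold cat map) and the 8×8 block grid are assumptions for illustration; the actual scheme derives its mapping parameters from the private key.

```python
# Sketch: 2D torus automorphism as a bijective block-scattering map.

def torus_map(x, y, n, k=1):
    """Map block (x, y) on an n x n grid with the matrix [[1, 1], [k, k+1]]."""
    return (x + y) % n, (k * x + (k + 1) * y) % n

n = 8
grid = [(x, y) for x in range(n) for y in range(n)]
mapped = [torus_map(x, y, n) for x, y in grid]

# Determinant (k+1) - k = 1, so the map is a bijection on the grid:
# every block's recovery data lands in exactly one other block.
assert len(set(mapped)) == n * n
```

Because the map is invertible modulo n, the verifier holding the key can recompute where each block's intensity means were stored and recover tampered regions.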