Abstract: Modern spatial database management systems require a unique Spatial Access Method (SAM) in order to solve complex spatial queries efficiently. In this context the spatial data structure takes a prominent place in the SAM. An inadequate data structure leads to poor algorithmic choices and a deficient understanding of algorithm behavior on the spatial database. A key step in developing a better semantic spatial object data structure is to quantify the performance effects of semantic and outlier detection that are not reflected in previous tree structures (the R-Tree and its variants). This paper explores a novel SSRO-Tree, a SAM based on the Topo-Semantic approach. The paper shows how to identify and handle semantic spatial objects with outlier objects during page overflow/underflow, using gain/loss metrics. We introduce a new SSRO-Tree algorithm which achieves better performance in practice than the R*-Tree and RO-Tree on selection queries.
Abstract: The simultaneous determination of the multiple components phenol, resorcinol and catechol with a chemometric technique, a PC-ranking artificial neural network (PCranking-ANN) algorithm, is reported in this study. Based on the correlation coefficient method, 3 representative PCs are selected from the scores of the original UV spectral data (35 PCs) as the input patterns for the ANN to build a neural network model. The results were obtained after 8000 iterations. The RMSEP values for phenol, resorcinol and catechol with PCranking-ANN were 0.6680, 0.0766 and 0.1033, respectively. Calibration ranges were 0.50-21.0, 0.50-15.1 and 0.50-20.0 μg ml-1 for phenol, resorcinol and catechol, respectively. The proposed method was successfully applied to the determination of phenol, resorcinol and catechol in synthetic and water samples.
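The PC-ranking step described above (selecting a few principal components by their correlation with the analyte concentration, then feeding those scores to the network) can be sketched minimally with numpy. All data here are synthetic stand-ins; the matrix sizes, noise levels and the two-variable relationship are invented for the illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for UV spectra: 30 mixtures x 35 wavelengths (assumption).
X = rng.normal(size=(30, 35))
c = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=30)  # one analyte concentration

# PCA scores via SVD of the mean-centred data (one column per PC).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s

# Rank PCs by |correlation| with the concentration and keep the top 3
# as the ANN input pattern (the PC-ranking step).
corr = np.array([abs(np.corrcoef(scores[:, j], c)[0, 1])
                 for j in range(scores.shape[1])])
top3 = np.argsort(corr)[::-1][:3]
ann_inputs = scores[:, top3]        # (30, 3) input patterns for the ANN
print(ann_inputs.shape)
```

The reduced (30, 3) score matrix would then replace the full 35-PC scores as the network's input layer.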
Abstract: As network-based technologies become omnipresent,
demands to secure networks and systems against threats increase.
One of the effective ways to achieve higher security is through the
use of intrusion detection systems (IDS), software tools that detect
anomalies in a computer or network. In this paper, an IDS has been
developed using an improved machine-learning-based algorithm, the
Locally Linear Neuro-Fuzzy Model (LLNF), for classification,
although this model was originally used for system identification. A
key technical challenge in IDS and LLNF learning is the curse of
high dimensionality. Therefore a feature selection phase is proposed
which is applicable to any IDS. By investigating the use of three
feature selection algorithms in this model, it is shown that adding a
feature selection phase reduces the computational complexity of our
model. Feature selection algorithms require the use of a feature
goodness measure. The use of both a linear measure (the linear
correlation coefficient) and a non-linear measure (mutual
information) is investigated.
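The non-linear feature goodness measure named above, mutual information, can be sketched with a simple histogram estimate. The data below are synthetic (one feature correlated with the reference variable, one independent of it); bin count and sample size are arbitrary choices for the sketch.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
x = rng.normal(size=5000)
informative = x + rng.normal(scale=0.3, size=5000)   # depends on x
irrelevant = rng.normal(size=5000)                   # independent of x

# A feature goodness measure should score the dependent feature higher.
print(mutual_information(x, informative) > mutual_information(x, irrelevant))
```

The linear counterpart, the correlation coefficient, is `np.corrcoef` applied to the same pairs; the point of using both is that mutual information also detects non-linear dependence.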
Abstract: In this paper a new Genetic Algorithm based on a heuristic operator and a Centre of Mass selection operator (CMGA) is designed for the unbounded knapsack problem (UKP), an NP-hard combinatorial optimization problem. The proposed genetic algorithm is based on a heuristic operator that utilizes problem-specific knowledge. The centre of mass operator, when combined with other genetic operators, forms an algorithm competitive with existing ones. Computational results show that the proposed algorithm is capable of obtaining high-quality solutions for standard randomly generated knapsack instances. A comparative study of CMGA with a simple GA on unbounded knapsack instances of size up to 200 shows the superiority of CMGA. Thus CMGA is an efficient tool for solving the UKP, and the algorithm is also competitive with other genetic algorithms.
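The abstract does not define the centre-of-mass operator itself, so the sketch below is only the plain "simple GA" baseline it is compared against, on a tiny invented UKP instance (weights, values, capacity and all GA parameters are assumptions). In the UKP each item may be taken any number of times, so a chromosome is a vector of item counts.

```python
import random

random.seed(42)
weights = [3, 5, 7]        # hypothetical item weights
values  = [4, 7, 10]       # hypothetical item values
capacity = 20

def fitness(counts):
    w = sum(c * wi for c, wi in zip(counts, weights))
    v = sum(c * vi for c, vi in zip(counts, values))
    return v if w <= capacity else 0          # infeasible -> worthless

def random_solution():
    return [random.randint(0, capacity // w) for w in weights]

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(counts):
    i = random.randrange(len(counts))
    counts[i] = random.randint(0, capacity // weights[i])
    return counts

pop = [random_solution() for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                           # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(30)]

best = max(pop, key=fitness)
print(best, fitness(best))
```

CMGA would replace the truncation selection here with its centre-of-mass selection and add the paper's heuristic operator; those are not reproduced because the abstract gives no detail.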
Abstract: As open innovation has received increasing attention in
the management of innovation, the importance of identifying
potential partners is growing. This paper suggests a methodology to
identify interested parties, acting as an innovation intermediary, to
enable open innovation through patent networks. To implement the
methodology, multi-stage patent citation analysis, such as
bibliographic coupling, and information visualization methods, such
as keyword vector mapping, are utilized. The contribution of this
paper is that it can present meaningful collaboration keywords to the
identified potential partners in the network, since not only citation
information but also patent textual information is used.
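Bibliographic coupling, named above as one citation-analysis stage, is a standard measure: two documents are coupled by the number of references they share. A minimal sketch on invented patents:

```python
# Hypothetical patents mapped to the sets of prior patents they cite.
citations = {
    "P1": {"A", "B", "C"},
    "P2": {"B", "C", "D"},
    "P3": {"E", "F"},
}

def coupling(p, q):
    """Bibliographic coupling strength: number of shared references."""
    return len(citations[p] & citations[q])

# P1 and P2 share references B and C, so they are candidate partners;
# P3 shares nothing with either.
print(coupling("P1", "P2"), coupling("P1", "P3"))
```

In the paper's setting the coupled pairs would form the edges of the patent network, which the keyword vector mapping then annotates.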
Abstract: This work presents the design of an expert system that aims at acquiring the patient's medical background and searching for suitable skin test selections. Skin testing is the tool most widely used to diagnose allergies. The expert system language CLIPS is used as the design tool. Finally, we present the evaluation of the proposed expert system, which was performed by importing certain medical cases, for which the system produced suitable, successful skin tests.
Abstract: Quality control is the crucial step in the ISO 9001
Quality System Management Standard for companies. When
measuring the quality level of both raw materials and
semi-products/products, the calibration of the measuring device is an
essential requirement. Calibration suppliers are in the service sector,
and therefore calibration supplier selection is becoming a worthy
topic for improving service quality.
This study presents the results of a questionnaire about the
selection criteria for a calibration supplier. The questionnaire was
applied to 103 companies and the results are discussed in this paper.
The analysis was made with the MINITAB 14.0 statistical software.
“Competence of documentation” and “technical capability” are
defined as prerequisites because of the ISO/IEC 17025:2005
standard. In addition, “warranties and complaint policy”,
“communication”, “service features”, “quality” and “performance
history” are defined as very important criteria for calibration
supplier selection.
Abstract: A simple analytical model has been developed to
optimize biasing conditions for obtaining maximum linearity among
lattice-matched, pseudomorphic and metamorphic HEMT types as
well as enhancement and depletion HEMT modes. A nonlinear
current-voltage model has been simulated based on extracted data to
study and select the most appropriate type and mode of HEMT in
terms of a given gate-source biasing voltage within the device so as
to employ the circuit for the highest possible output current or
voltage linear swing. Simulation results can be used as a basis for the
selection of optimum gate-source biasing voltage for a given type
and mode of HEMT with regard to a circuit design. The results can
also serve as a criterion for choosing the optimum type or mode of
HEMT for a predetermined biasing condition.
Abstract: In this paper, a comparison of reflector antenna
analysis techniques based on the wave and ray natures of optics is
presented for an offset reflector antenna using the GRASP (General
Reflector antenna Analysis Software Package) software. The results
obtained using PO (Physical Optics), PTD (Physical Theory of
Diffraction), and GTD (Geometrical Theory of Diffraction) are
compared. The validity of the PO and GTD techniques in regions
around the antenna, the caustic behavior of GTD in the main beam,
and the deviation of GTD for near-in sidelobes of the radiation
pattern are discussed. The comparison of far-out sidelobes predicted
by PO, PO + PTD and GTD is described. The effect of direct
radiation from the feed, which informs feed selection for the system,
is also addressed.
Abstract: In the present analysis an unsteady laminar
forced convection water boundary-layer flow is considered.
Fluid properties such as viscosity and Prandtl number are
taken as variables that are inversely proportional to
temperature. Using the quasi-linearization technique the
nonlinear coupled partial differential equations are linearized,
and numerical solutions are obtained using an implicit finite
difference scheme with an appropriate selection of step sizes.
Non-similar solutions have been obtained from the starting
point of the stream-wise coordinate to the point where the skin
friction value vanishes. The effect of non-uniform mass transfer
along the surface of the cylinder through a slot on the skin
friction and heat transfer coefficients is studied.
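The quasi-linearization step invoked above can be stated generically. The abstract gives no equations, so the operator $N$ and iterate $F$ below are illustrative symbols, not the paper's notation: each nonlinear term is replaced by its first-order Taylor expansion about the previous iterate, yielding a linear problem per iteration.

```latex
% Quasi-linearization (Newton-style): a nonlinear term N(F) in the
% boundary-layer equations is replaced by its first-order expansion
% about the previous iterate F^{(k)}, giving a linear equation for
% the next iterate F^{(k+1)}:
N\!\left(F^{(k+1)}\right) \approx
  N\!\left(F^{(k)}\right)
  + \left.\frac{\partial N}{\partial F}\right|_{F^{(k)}}
    \left(F^{(k+1)} - F^{(k)}\right)
```

The linearized system is then what the implicit finite difference scheme solves at each iteration.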
Abstract: This study includes the effect of strain and storage
period, and their interaction, on some quantitative and qualitative
traits and the percentages of the egg components in eggs collected at
the start of production (at age 24 weeks). Eggs were divided into
three storage periods (1, 7 and 14 days) under refrigerator
temperature (5-7 °C). Fifty-seven eggs were obtained randomly from
each strain, including Isa Brown and Lohman White. The General
Linear Model within the SAS programme was used to analyze the
collected data, and correlations between the studied traits were
calculated for each strain. Average egg weight (EW), Haugh Unit
(HU), yolk index (YI), yolk % (HP), albumin % (AP) and
yolk-to-albumin ratio (YAR) were 56.629 g, 87.968%, 0.493,
22.13%, 67.74% and 32.76, respectively. Eggs produced by Isa
Brown surpassed those produced by Lohman White significantly (P
Abstract: This paper discusses the site selection process for
biological soil conservation planning. It was supported by a
value-focused approach and spatial multi-criteria evaluation
techniques. A first set of spatial criteria was used to design a number
of potential sites. Next, a new set of spatial and non-spatial criteria
was employed, including natural factors and financial costs, together
with the degree of suitability of the Bonkuh watershed for biological
soil conservation planning, in order to recommend the most
acceptable program. The whole process was facilitated by a new
software tool that supports spatial multiple-criteria evaluation
(SMCE) in GIS software (ILWIS). The application of this tool,
combined with continual public feedback, has provided an effective
methodology for solving complex decision problems in biological
soil conservation planning.
Abstract: An alternative approach to the use of the Discrete Fourier
Transform (DFT) for Magnetic Resonance Imaging (MRI)
reconstruction is the use of parametric modeling techniques. This
method is suitable for problems in which the image can be modeled
by explicitly known source functions with a few adjustable
parameters. Despite the success reported in the use of modeling
techniques as an alternative MRI reconstruction approach, two
important problems constitute challenges to the applicability of this
method: model order estimation and model coefficient
determination. In this paper, five of the suggested methods of
evaluating the model order have been assessed: the Final Prediction
Error (FPE), the Akaike Information Criterion (AIC), Residual
Variance (RV), Minimum Description Length (MDL) and the
Hannan and Quinn (HNQ) criterion. These criteria were evaluated on
MRI data sets based on the Transient Error Reconstruction
Algorithm (TERA) method. The result for each criterion is compared
to the result obtained using a fixed-order technique, and three
measures of similarity were evaluated. The results show that the use
of MDL gives the highest measure of similarity to the fixed-order
technique.
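Three of the criteria named above have common textbook forms that can be computed directly from the residual sum of squares. The sketch below uses those standard forms (the paper may use variants) and an invented residual sweep, chosen so the stronger k·log(n) penalty of MDL picks a smaller model order than AIC's 2k penalty:

```python
import math

def criteria(rss, n, k):
    """Model-order criteria from residual sum of squares `rss`,
    `n` data points and `k` model parameters (common textbook forms)."""
    sigma2 = rss / n
    fpe = sigma2 * (n + k) / (n - k)              # Final Prediction Error
    aic = n * math.log(sigma2) + 2 * k            # Akaike Information Criterion
    mdl = n * math.log(sigma2) + k * math.log(n)  # Minimum Description Length
    return fpe, aic, mdl

# Hypothetical sweep: the residual shrinks slowly past order 4, so the
# larger per-parameter penalty of MDL stops earlier than AIC does.
n = 200
rss_by_order = {2: 40.0, 4: 30.0, 8: 28.0}
best_aic = min(rss_by_order, key=lambda k: criteria(rss_by_order[k], n, k)[1])
best_mdl = min(rss_by_order, key=lambda k: criteria(rss_by_order[k], n, k)[2])
print(best_mdl, best_aic)
```

This illustrates why the criteria can disagree on the same data, which is what the paper's similarity comparison against a fixed-order reconstruction measures.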
Abstract: A big organization may have multiple branches spread across different locations. Processing of data from these branches becomes a huge task when innumerable transactions take place. Moreover, branches may be reluctant to forward their data for centralized processing but are ready to pass on their association rules. Local mining may also generate a large number of rules. Further, it is not practically possible for all local data sources to be of the same size. A model is proposed for discovering valid rules from data sources of different sizes, where the valid rules are the high-weighted rules. These rules can be obtained from the high-frequency rules generated at each of the data sources. A data source selection procedure is considered in order to synthesize rules efficiently. Support Equalization is another proposed method, which focuses on eliminating low-frequency rules at the local sites themselves, thus reducing the number of rules by a significant amount.
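One natural way to weight rules from different-sized sources, sketched below, is to weight each branch's reported support by its share of the total rows; the branch sizes, rule supports and threshold are all invented for the illustration and the paper's exact weighting scheme may differ.

```python
# Hypothetical local mining results: each branch reports (rows, {rule: support}).
branches = [
    (10_000, {"A->B": 0.30, "B->C": 0.05}),
    ( 2_000, {"A->B": 0.10, "C->D": 0.40}),
]

def synthesize(branches, min_support=0.2):
    """Weight each branch's support by its share of the total rows and
    keep rules whose synthesized support clears the threshold."""
    total = sum(rows for rows, _ in branches)
    merged = {}
    for rows, rules in branches:
        w = rows / total
        for rule, sup in rules.items():
            merged[rule] = merged.get(rule, 0.0) + w * sup
    return {r: round(s, 4) for r, s in merged.items() if s >= min_support}

print(synthesize(branches))
```

Support Equalization would apply a threshold like `min_support` at each branch before the rules are ever transmitted, which is what cuts the rule volume.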
Abstract: Evolvable hardware (EHW) is a developing field that
applies evolutionary algorithms (EAs) to automatically design
circuits, antennas, robot controllers, etc. A lot of research has been
done in this area, and several different EAs have been introduced to
tackle numerous problems, such as scalability and evolvability.
However, every time a specific EA is chosen for solving a particular
task, all its components, such as population size, initialization,
selection mechanism, mutation rate, and genetic operators, must be
selected in order to achieve the best results. Over the last three
decades, the selection of the right parameters for the EA's
components for solving different “test problems” has been
investigated. In this paper the behaviour of the mutation rate for
designing logic circuits, which has not been analyzed before, is
studied in depth. The mutation rate in an EHW system modifies the
number of inputs of each logic gate, the functionality (for example
from AND to NOR) and the connectivity between logic gates. The
behaviour of the mutation has been analyzed based on the number of
generations, genotype redundancy and number of logic gates for the
evolved circuits. The experimental results characterize the behaviour
of the mutation rate during evolution for the design and optimization
of simple logic circuits, and they suggest the best mutation rate to
use for designing combinational logic circuits. The research
presented is particularly important for those who would like to
implement a dynamic mutation rate inside an evolutionary algorithm
for evolving digital circuits. Research on the mutation rate during the
last 40 years is also summarized.
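The mutation described above, acting on gate functionality and connectivity, can be sketched on a toy genotype. The encoding (a list of gates, each with a function and two input indices over four primary inputs) and the rates are assumptions for the illustration, not the paper's representation.

```python
import random

random.seed(7)
FUNCTIONS = ["AND", "OR", "NAND", "NOR", "XOR"]

# Hypothetical genotype: gate i is (function, input_a, input_b), where
# inputs 0..3 are primary inputs and 4+j refers to earlier gate j.
genotype = [("AND", 0, 1), ("OR", 2, 3), ("XOR", 4, 5)]

def mutate(genotype, rate=0.3):
    """Point mutation: with probability `rate` per gate, change either
    its function (functionality) or one input index (connectivity)."""
    out = []
    for i, (func, a, b) in enumerate(genotype):
        if random.random() < rate:
            if random.random() < 0.5:
                func = random.choice(FUNCTIONS)   # e.g. AND -> NOR
            else:
                a = random.randrange(4 + i)       # rewire to any earlier node
        out.append((func, a, b))
    return out

mutated = mutate(genotype)
print(len(mutated) == len(genotype))
```

Raising or lowering `rate` during a run is exactly the kind of dynamic mutation-rate scheme the abstract says the results would inform.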
Abstract: In this paper, a new adaptive Fourier decomposition
(AFD) based time-frequency speech analysis approach is proposed.
Given that the fundamental frequency of speech signals often
undergoes fluctuation, the classical short-time Fourier transform (STFT)
based spectrogram analysis suffers from the difficulty of window size
selection. AFD is a newly developed signal decomposition theory. It is
designed to deal with time-varying non-stationary signals. Its
outstanding characteristic is to provide instantaneous frequency for
each decomposed component, so the time-frequency analysis becomes
easier. Experiments are conducted based on the sample sentence in
TIMIT Acoustic-Phonetic Continuous Speech Corpus. The results
show that the AFD based time-frequency distribution outperforms the
STFT based one.
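The window-size difficulty attributed to the STFT above can be made concrete with a bare-bones spectrogram. The signal below is a synthetic tone with a fluctuating fundamental, not TIMIT data; sample rate, window lengths and hop are arbitrary choices for the sketch.

```python
import numpy as np

fs = 8000
t = np.arange(0, 0.5, 1 / fs)
# A toy "speech-like" tone whose fundamental frequency fluctuates.
signal = np.sin(2 * np.pi * (200 + 50 * np.sin(2 * np.pi * 4 * t)) * t)

def stft_frames(x, win, hop):
    """Magnitude spectra of Hann-windowed frames (a bare-bones STFT)."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

# The trade-off: bin spacing is fs / win, so a long window resolves
# frequency finely but smears the fluctuating pitch across time, while
# a short window does the opposite. No single `win` suits both.
for win in (128, 1024):
    spec = stft_frames(signal, win, hop=win // 2)
    print(win, round(fs / win, 1))
```

AFD sidesteps this by decomposing the signal into components that each carry an instantaneous frequency, so no fixed window has to be chosen.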
Abstract: Procurement is an important component of operating
resource management, and e-procurement is the golden key to
optimizing the supply chain system. Global firms are optimistic
about the level of savings that can be achieved through full
implementation of e-procurement strategies. E-procurement is an
Internet-based business process for obtaining materials and services
and managing their inflow into the organization. In this paper,
supply chains, e-procurement and its benefits to organizations have
been studied. E-procurement in construction and its drivers and
barriers have also been discussed, and a framework for supplier
selection in an e-procurement environment has been demonstrated.
This paper also addresses critical success factors in adopting
e-procurement in supply chains.
Abstract: Supplier selection, in real situations, is affected by
several qualitative and quantitative factors and is one of the most
important activities of a purchasing department. Since, when
evaluating suppliers against the criteria or factors, decision makers
(DMs) do not have precise, exact and complete information, supplier
selection becomes more difficult. In this case, Grey theory helps us
to deal with this problem of uncertainty. Here, we apply the
Technique for Order Preference by Similarity to Ideal Solution
(TOPSIS) to evaluate and select the best supplier using interval
fuzzy numbers. In this article, we compare TOPSIS with some other
approaches and then demonstrate that the TOPSIS concept is very
useful for ranking and selecting the right supplier.
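The classical crisp TOPSIS procedure underlying the method above can be sketched in a few numpy lines; the paper's interval grey-number variant applies the same steps to interval bounds. The decision matrix, criteria and weights below are invented, and both criteria are treated as benefit criteria.

```python
import numpy as np

# Hypothetical decision matrix: 3 suppliers x 2 benefit criteria
# (quality, delivery) on a common scale, with criterion weights.
X = np.array([[7.0, 9.0],
              [8.0, 7.0],
              [3.0, 4.0]])
w = np.array([0.6, 0.4])

# 1. vector-normalise, 2. weight, 3. distances to the ideal and
# anti-ideal solutions, 4. closeness coefficient (higher is better).
V = w * X / np.linalg.norm(X, axis=0)
ideal, anti = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)
print(int(np.argmax(closeness)))   # index of the best-ranked supplier
```

With cost criteria, the ideal takes the column minimum instead of the maximum; the closeness ordering is the supplier ranking.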
Abstract: Single nucleotide polymorphisms (SNPs) hold much promise as a basis for disease-gene association. However, research is limited by the cost of genotyping the tremendous number of SNPs. Therefore, it is important to identify a small subset of informative SNPs, the so-called tag SNPs. This subset consists of selected SNPs of the genotypes and accurately represents the rest of the SNPs. Furthermore, an effective evaluation method is needed to assess the prediction accuracy of a set of tag SNPs. In this paper, a genetic algorithm (GA) is applied to the tag SNP problem, and the K-nearest neighbor (K-NN) method serves as the predictor for evaluating tag SNP selection. The experimental data used were taken from the HapMap project and consist of genotype data rather than haplotype data. The proposed method consistently identified tag SNPs with considerably better prediction accuracy than methods from the literature. At the same time, the number of tag SNPs identified was smaller than in the other methods. The run time of the proposed method was much shorter than that of the SVM/STSA method when the same accuracy was reached.
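The K-NN evaluation step described above, predicting the left-out SNPs from a candidate tag set, can be sketched on a synthetic genotype matrix. The data below are invented (one SNP made to mirror the tag SNP exactly, others random); real input would be HapMap genotypes and the GA would be searching over tag sets.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical genotype matrix: 60 individuals x 6 SNPs coded 0/1/2.
snp0 = rng.integers(0, 3, size=60)
G = np.column_stack([snp0,                         # SNP 0 (tag candidate)
                     snp0,                         # SNP 1 mirrors SNP 0
                     rng.integers(0, 3, size=(60, 4))])

def knn_accuracy(G, tags, target, k=3):
    """Leave-one-out K-NN: predict the `target` SNP from the tag SNPs."""
    X, y = G[:, tags].astype(float), G[:, target]
    hits = 0
    for i in range(len(y)):
        d = np.abs(X - X[i]).sum(axis=1)   # L1 distance in tag space
        d[i] = np.inf                      # exclude the sample itself
        nn = np.argsort(d)[:k]
        votes = np.bincount(y[nn], minlength=3)
        hits += int(votes.argmax() == y[i])
    return hits / len(y)

# A tag set that determines a SNP predicts it far better than it
# predicts an unrelated SNP.
print(knn_accuracy(G, [0], 1) > knn_accuracy(G, [0], 4))
```

A GA fitness function for tag SNP selection would average this accuracy over all non-tag SNPs while penalising the tag-set size.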
Abstract: Serial Analysis of Gene Expression (SAGE) is a powerful
quantification technique for generating cell or tissue gene expression
data. The gene expression profile of a cell or tissue in several
different states is difficult for biologists to analyze because of the
large number of genes typically involved. However, feature selection
in machine learning can successfully reduce this problem. The
method reduces the features (genes) in a specific SAGE data set,
retaining only the relevant genes. In this study, we used a genetic
algorithm to implement feature selection, and evaluated the
classification accuracy of the selected features with the K-nearest
neighbor method. In order to validate the proposed method, we used
two SAGE data sets for testing. The results of this study show that
the number of features of the original SAGE data set can be
significantly reduced and higher classification accuracy can be
achieved.