Abstract: To obtain a high-efficiency parallel algorithm for solving sparse block pentadiagonal linear systems on vector and parallel processors, stair matrices are used in this paper to construct parallel polynomial approximate inverse preconditioners. These preconditioners are appropriate when the desired target is to maximize parallelism. Moreover, some theoretical results about these preconditioners are presented, and it is described how to construct them effectively for any nonsingular block pentadiagonal H-matrix. In addition, the effectiveness of these preconditioners is illustrated with numerical experiments arising from the two-dimensional biharmonic equation.
Abstract: Discrete wavelet transform (DWT) has been widely adopted in biomedical signal processing for denoising, compression
and so on. Choosing a suitable decomposition level (DL) in DWT is of paramount importance to its performance. In this paper, we propose to exploit the sparseness of the transformed signals to determine the appropriate DL. Simulation results show that the sparseness of transformed signals after DWT increases with increasing DL. Additional Monte-Carlo simulations verify the effectiveness of the sparseness measure in determining the DL.
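The abstract does not specify which wavelet or sparseness measure is used; a minimal sketch, assuming a Haar DWT and Hoyer's sparseness measure (both assumptions, not the paper's stated choices), might look like:

```python
import math

def haar_step(x):
    # One Haar analysis step: scaled pairwise sums (approximation)
    # and pairwise differences (detail).
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_dwt(x, levels):
    # Multi-level Haar DWT: returns all coefficients as one flat vector
    # [approx_L, detail_L, ..., detail_1], same length as the input.
    coeffs, a = [], list(x)
    for _ in range(levels):
        a, d = haar_step(a)
        coeffs = d + coeffs
    return a + coeffs

def hoyer_sparseness(v):
    # Hoyer's measure: 0 for a perfectly flat vector, 1 for a 1-sparse one.
    n = len(v)
    l1 = sum(abs(c) for c in v)
    l2 = math.sqrt(sum(c * c for c in v))
    return (math.sqrt(n) - l1 / l2) / (math.sqrt(n) - 1)

# Smooth toy signal: sparseness of the coefficient vector grows with the DL
signal = [math.sin(2 * math.pi * t / 256) for t in range(256)]
scores = [hoyer_sparseness(haar_dwt(signal, dl)) for dl in range(1, 6)]
```

For a smooth signal, deeper decomposition concentrates energy into fewer approximation coefficients, which is the trend the abstract reports.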
Abstract: Text categorization is the problem of classifying text
documents into a set of predefined classes. After a preprocessing
step, the documents are typically represented as large sparse vectors.
When training classifiers on large collections of documents, both the
time and memory restrictions can be quite prohibitive. This justifies
the application of feature selection methods to reduce the
dimensionality of the document-representation vector. In this paper,
we present three feature selection methods: Information Gain,
Support Vector Machine feature selection (called SVM_FS) and
Genetic Algorithm with SVM (called GA_SVM). We show that the
best results were obtained with the GA_SVM method for a relatively
small dimension of the feature vector.
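As an illustration of the first of these criteria, Information Gain for a term can be computed from the class entropy before and after splitting the documents on the term's presence. A minimal sketch on a toy corpus (the documents and labels are hypothetical, not from the paper's collection):

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a class-label list, in bits.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(docs, labels, term):
    # IG(t) = H(C) - H(C | t): split documents by presence/absence of `term`.
    with_t = [l for d, l in zip(docs, labels) if term in d]
    without = [l for d, l in zip(docs, labels) if term not in d]
    n = len(labels)
    cond = (len(with_t) / n) * entropy(with_t) + (len(without) / n) * entropy(without)
    return entropy(labels) - cond

docs = [{"ball", "goal"}, {"ball", "team"}, {"vote", "law"}, {"law", "court"}]
labels = ["sport", "sport", "politics", "politics"]
ranked = sorted({t for d in docs for t in d},
                key=lambda t: information_gain(docs, labels, t), reverse=True)
```

Keeping only the top-ranked terms reduces the dimension of the document-representation vector, which is the role feature selection plays above.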
Abstract: Paleoclimate was reconstructed from the clay mineral
assemblages of the shale units of the Pabdeh (Paleocene-Oligocene), Gurpi
(Upper Cretaceous), Kazhdumi (Albian-Cenomanian) and Gadvan
(Aptian-Neocomian) formations in the Bangestan anticline. To
compare the clay mineral assemblages in these formations,
selected samples were also taken from the available formations in drilled wells
in the Ahvaz, Marun, Karanj, and Parsi oil fields. The collected samples were
prepared using standard clay mineral methodology and treated as normal,
glycolated and heated oriented glass slides. Identification was made on
X-ray diffractograms. Illite content varies from 8 to 36%, increasing from the
Pabdeh to the Gurpi Formation, which may be due to a dominant dry climate.
Kaolinite is in the range of 12-49%; its variation across the formations could
mark climate changes from wet to dry, which is supported by the lithological
changes. Chlorite (4-28%) is detected only in samples without kaolinite.
Mixed-layer minerals, mixtures of illite-chlorite and
illite-vermiculite-montmorillonite, vary from 6 to 36% and decrease from base
to top during Kazhdumi deposition, which may reflect a decrease in the illite
leaching process. Vermiculite was also found, in very small quantities, in
units without kaolinite. Montmorillonite varies from 8 to 43%, and its presence
is attributed to terrestrial depositional conditions. Stratigraphic evidence
also supports the idea that the clay mineral distribution is a function of
climate changes. The present results thus suggest a possible procedure for
evaluating ancient climate changes.
Abstract: This paper proposes classification models to be used as a
proxy for hard disk drive (HDD) functional test equipment, which
requires more than two weeks to classify
HDD status as either "Pass" or "Fail". These models
were constructed using a committee network consisting of a
number of single neural networks. The paper also includes a
method, called the "enforce learning method", to address the
sparseness of data for failed parts. Our results reveal that the
classification models constructed with the proposed method
perform well under sparse data conditions; the models,
which need only a few seconds for HDD classification, could therefore
substitute for the HDD functional tests.
Abstract: Natural language processing systems pose a unique
challenge for software architectural design as system complexity has
increased continually and systems cannot be easily constructed from
loosely coupled modules. Lexical, syntactic, semantic, and pragmatic
aspects of linguistic information are tightly coupled in a manner that
requires separation of concerns in a special way in design,
implementation and maintenance. An aspect oriented software
architecture is proposed in this paper after critically reviewing
relevant architectural issues. For the purpose of this paper, the
syntactic aspect is characterized by an augmented context-free
grammar. The semantic aspect is composed of multiple perspectives
including denotational, operational, axiomatic and case frame
approaches. Case frame semantics matured in India from deep
thematic analysis. It is argued that lexical, syntactic, semantic and
pragmatic aspects work together in a mutually dependent way and
their synergy is best represented in the aspect oriented approach. The
software architecture is presented with an augmented Unified
Modeling Language.
Abstract: In this paper, we propose a method for resolving dependency ambiguities of Korean subordinate clauses based on Support Vector Machines (SVMs). Dependency analysis of clauses is well known to be one of the most difficult tasks in parsing sentences, especially in Korean. To solve this problem, we assume that the dependency relations of Korean subordinate clauses are relations among the verb phrases, verbs and endings in the clauses, so that the problem can be represented as a binary classification task. To apply SVMs to this problem, we selected two kinds of features: static and dynamic. Experimental results on the STEP2000 corpus show that our system achieves an accuracy of 73.5%.
Abstract: Machine Translation (MT) between the Thai and English languages has been a challenging research topic in natural language processing. Most research has been done on English-to-Thai machine translation, but not the other way around. This paper presents a Thai-to-English machine translation system that translates a Thai sentence into an interlingua, a Thai LFG tree, using LFG grammar and a bottom-up parser. The Thai LFG tree is then transformed into the corresponding English LFG tree by pattern matching and node transformation. Finally, an equivalent English sentence is created using the structural information prescribed by the English LFG tree. Based on the results of experiments designed to evaluate its performance, the proposed system proved effective in providing useful translations from Thai to English.
Abstract: In this paper, a fast high-resolution range profile (HRRP) synthesis algorithm called orthogonal matching pursuit with sensing dictionary (OMP-SD) is proposed. It formulates traditional HRRP synthesis as a sparse approximation problem over a redundant dictionary. Because it employs the prior knowledge that the synthetic range profiles (SRPs) of targets are sparse, SRP synthesis can be accomplished even in the presence of data loss. Moreover, introducing the sensing dictionary (SD) reduces the computational complexity from O(MNDK) flops for OMP to O(M(N + D)K) flops for OMP-SD. Simulation experiments illustrate its advantages in both additive white Gaussian noise (AWGN) and noiseless situations.
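The OMP-SD variant with its sensing dictionary is specific to this paper; as a baseline, plain orthogonal matching pursuit over a redundant dictionary can be sketched as follows (the random dictionary and sparsity level are illustrative assumptions):

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal Matching Pursuit: greedily pick the dictionary atom most
    # correlated with the residual, then re-fit the support by least squares.
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
A /= np.linalg.norm(A, axis=0)                 # unit-norm atoms
x_true = np.zeros(256)
x_true[[10, 50, 200]] = [3.0, -2.0, 1.5]        # 3-sparse profile
y = A @ x_true                                  # noiseless measurements
x_hat = omp(A, y, 3)
```

The sensing-dictionary idea replaces `A.T` in the correlation step with a second, cheaper matrix designed for atom selection, which is where the stated flop reduction comes from.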
Abstract: In this paper we propose a novel method for human
face segmentation using the elliptical structure of the human head. It
makes use of the information present in the edge map of the image.
In this approach we use the fact that the eigenvalues of covariance
matrix represent the elliptical structure. The large and small
eigenvalues of covariance matrix are associated with major and
minor axial lengths of an ellipse. The other elliptical parameters are
used to identify the centre and orientation of the face. Since an
Elliptical Hough Transform requires 5D Hough Space, the Circular
Hough Transform (CHT) is used to evaluate the elliptical parameters.
A sparse matrix technique is used to perform the CHT: since the
accumulator contains only a small number of non-zero elements, the
zero elements need not be stored, reducing both storage space and
computation time. A neighborhood suppression scheme is used to identify
the valid Hough peaks. The accurate positions of the circumference pixels
for occluded and distorted ellipses are identified using Bresenham's
raster scan algorithm, which uses geometric symmetry
properties. This method does not require the evaluation of tangents
for curvature contours, which are very sensitive to noise. The method
has been evaluated on several images with different face orientations.
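As an illustration of the sparse-accumulator idea, a Circular Hough Transform can store votes in a dictionary so that zero cells are never materialized. A toy sketch (the grid size, radius, and angular step are assumptions for illustration):

```python
import math
from collections import defaultdict

def circular_hough(edge_points, radius, grid=64):
    # Sparse accumulator: only cells that actually receive votes are stored,
    # mirroring the sparse-matrix trick described in the text.
    acc = defaultdict(int)
    for (x, y) in edge_points:
        for deg in range(0, 360, 4):            # vote along a circle of centres
            a = round(x - radius * math.cos(math.radians(deg)))
            b = round(y - radius * math.sin(math.radians(deg)))
            if 0 <= a < grid and 0 <= b < grid:
                acc[(a, b)] += 1
    return max(acc, key=acc.get)                # peak = most likely centre

# Synthetic edge map: a circle of radius 10 centred at (32, 30)
edges = [(round(32 + 10 * math.cos(t)), round(30 + 10 * math.sin(t)))
         for t in (2 * math.pi * i / 90 for i in range(90))]
centre = circular_hough(edges, radius=10)
```

A dense accumulator for the same grid would allocate 64 x 64 cells per radius; the dictionary stores only the rings of candidate centres actually voted for.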
Abstract: In this paper a comprehensive model of a fossil fueled
power plant (FFPP) is developed in order to evaluate the
performance of a newly designed turbine follower controller.
Considering the drawbacks of previous works, an overall model is
developed to minimize the error between each subsystem model
output and the experimental data obtained at the actual power plant.
The developed model is organized into two main subsystems, namely the
boiler and the turbine. Considering the characteristics of each FFPP
subsystem, different modeling approaches are used. For the
economizer, evaporator, superheater and reheater, first-order models
are determined based on the principles of mass and energy conservation.
Simulations verify the accuracy of the developed models. Due to the
nonlinear characteristics of the attemperator, a new model based on a
genetic-fuzzy system utilizing the Pittsburgh approach is developed,
showing promising performance compared with models derived by other
methods such as ANFIS. The optimization constraints are handled
utilizing penalty functions. The effect of increasing the number of
rules and membership functions on the performance of the proposed
model is also studied and evaluated. The turbine model is developed
based on the equation of adiabatic expansion. Parameters of all
evaluated models are tuned by means of evolutionary algorithms.
Based on the developed model a fuzzy PI controller is developed. It
is then successfully implemented in the turbine follower control
strategy of the plant. In this control strategy instead of keeping
control parameters constant, they are adjusted on-line with regard to
the error and the error rate. It is shown that the response of the
system improves significantly. It is also shown that fuel consumption
decreases considerably.
Abstract: In syntactic pattern recognition a pattern can be
represented by a graph. Given an unknown pattern represented by
a graph g, the problem of recognition is to determine if the graph g
belongs to a language L(G) generated by a graph grammar G. The
so-called IE graphs have been defined in [1] for a description of
patterns. The IE graphs are generated by so-called ETPL(k) graph
grammars defined in [1]. An efficient parsing algorithm for ETPL(k)
graph grammars for syntactic recognition of patterns represented by
IE graphs has been presented in [1]. In practice, structural
descriptions may contain pattern distortions, so that the assignment
of a graph g, representing an unknown pattern, to
a graph language L(G) generated by an ETPL(k) graph grammar G is
rejected by the ETPL(k) type parsing. Therefore, there is a need for
constructing effective parsing algorithms for recognition of distorted
patterns. The purpose of this paper is to present a new approach to
syntactic recognition of distorted patterns. To take into account all
variations of a distorted pattern under study, a probabilistic
description of the pattern is needed. A random IE graph approach is
proposed here for such a description ([2]).
Abstract: We have proposed an information filtering system
using index word selection from a document set based on the
topics included in a set of documents. This method narrows
down the particularly characteristic words in a document set
and the topics are obtained by Sparse Non-negative Matrix
Factorization. In information filtering, a document is often
represented as a vector whose elements correspond to the weights
of the index words, and the dimension of this vector grows as the
number of documents increases. Useless index words may therefore
be included. To address this problem, the dimension needs to be
reduced. Our
proposal reduces the dimension by selecting index words
based on the topics included in a document set. We have
applied the Sparse Non-negative Matrix Factorization to the
document set to obtain these topics. The filtering is carried out
based on a centroid of the learning document set. The centroid
is regarded as the user's interest and is represented as a document
vector whose elements consist of the weights of the selected index
words. Using the English test collection MEDLINE, we confirm the
effectiveness of our proposal: when an appropriate number of index
words is selected, the proposed selection improves recommendation
accuracy over previous methods. In addition, we examined the
selected index words and found that our proposal selects index
words covering some minor topics included in the document set.
Abstract: This study investigates the performance of radial basis function networks (RBFN) in forecasting the monthly CO2 emissions of an electric power utility. We also propose a method for input variable selection based on identifying the general relationships between groups of input candidates and the output. The effect each input has on the forecasting error is examined by removing all inputs except the variable under investigation from its group, calculating the network's parameters and performing the forecast. The new forecasting error is then compared with that of the reference model. Eight input variables were identified as the most relevant, significantly fewer than the 30 input variables of our reference model. The simulation results demonstrate that the model with the 8 inputs selected using the method introduced in this study performs as accurately as the reference model, while also being the most parsimonious.
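A minimal RBFN of the kind described, Gaussian hidden units with output weights fitted in closed form by (ridge) least squares, can be sketched as follows (the centres, width, and 1-D toy target are assumptions, not the utility's emissions data):

```python
import numpy as np

def fit_rbfn(X, y, centers, width, ridge=1e-8):
    # Radial basis function network: a Gaussian hidden layer followed by a
    # linear output; the output weights solve a regularized normal equation.
    def design(Xq):
        sq = np.square(Xq[:, None, :] - centers[None, :, :]).sum(-1)
        return np.exp(-sq / (2 * width ** 2))
    Phi = design(X)
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(len(centers)), Phi.T @ y)
    return lambda Xq: design(Xq) @ w

# 1-D toy forecasting target: one input, 50 samples
X = np.linspace(0, 1, 50)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
model = fit_rbfn(X, y, centers=np.linspace(0, 1, 10)[:, None], width=0.15)
pred = model(X)
```

Input variable selection then amounts to refitting this model with different input columns of X and comparing the resulting forecast errors, as the abstract outlines.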
Abstract: This paper proposes a high-level feature for online Lao handwriting recognition. The feature must be abstract enough that it does not change when characters are written by different persons at different speeds and proportions (shorter or longer strokes, heads, tails, loops, curves). In this feature, a character is divided into a sequence of curve segments, where a segment starts where the curve reverses rotation (between counterclockwise and clockwise). In each segment, the following features are gathered: the cumulative change in curve direction (negative for clockwise), the cumulative curve length, and the cumulative left-to-right, right-to-left, top-to-bottom and bottom-to-top lengths (the cumulative change along the X and Y axes of the segment). This feature is simple yet robust enough for high-accuracy recognition, and it can be gathered by parsing the original time-sampled sequence of X, Y pen locations without re-sampling. We also experimented with other segmentation points, such as the maximum-curvature point widely used by other researchers. Experimental results show a recognition rate of 94.62%, compared with 75.07% when using maximum-curvature points; the difference is due to the large variation of turning points in handwriting.
Abstract: Image retrieval is a topic where scientific interest is currently high. The important steps associated with image retrieval system are the extraction of discriminative features and a feasible similarity metric for retrieving the database images that are similar in content with the search image. Gabor filtering is a widely adopted technique for feature extraction from the texture images. The recently proposed sparsity promoting l1-norm minimization technique finds the sparsest solution of an under-determined system of linear equations. In the present paper, the l1-norm minimization technique as a similarity metric is used in image retrieval. It is demonstrated through simulation results that the l1-norm minimization technique provides a promising alternative to existing similarity metrics. In particular, the cases where the l1-norm minimization technique works better than the Euclidean distance metric are singled out.
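The l1-norm minimization (basis pursuit) step can be posed as a linear program by splitting x into non-negative positive and negative parts. A sketch using SciPy's `linprog` (the random under-determined system is illustrative, not the paper's retrieval setup):

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, b):
    # Basis pursuit: min ||x||_1 subject to Ax = b, as a linear program.
    # Write x = u - v with u, v >= 0; the objective is sum(u) + sum(v).
    m, n = A.shape
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 80))        # under-determined: 30 eqs, 80 unknowns
x_true = np.zeros(80)
x_true[[5, 40, 70]] = [2.0, -1.0, 3.0]   # sparse ground truth
b = A @ x_true
x_hat = l1_min(A, b)                     # recovers the sparsest solution
```

In the retrieval setting described above, the norm of the sparse coefficient vector obtained this way serves as the similarity score between the query image and each database image.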
Abstract: In the gas refineries of Iran's South Pars Gas
Complex, the Sulfrex demercaptanization process is used to remove
volatile and corrosive mercaptans from liquefied petroleum gases with
caustic solution. This process consists of two steps: removal of
low-molecular-weight mercaptans, and regeneration of the exhausted
caustic. Parameters such as the LPG feed temperature, caustic
concentration and the feed's mercaptan content in the extraction step,
and the sodium mercaptide content in the caustic, catalyst
concentration, caustic temperature and air injection rate in the
regeneration step, are the effective factors. This paper focuses on
temperature, which plays a key role in both mercaptan extraction and
caustic regeneration. The experimental results demonstrate that, by
optimizing temperature, the sodium mercaptide content in the caustic
is minimized owing to good oxidation, and the sulfur impurities in the
product are reduced.
Abstract: An experiment was conducted in October 2008 to investigate the ability of plant-associated biofertilizers to replace chemical fertilizers, the appropriate rate of chemical N fertilizer when these biofertilizers are used, and the interaction of the biofertilizers with each other. The field experiment was carried out in Persepolis (Throne of Jamshid) and arranged as a factorial based on a randomized complete block design with three replications. Azospirillum sp. bacteria were admixed at a concentration of 10^8 cfu/g and inoculated onto wheat seeds; Streptomyces sp. was used at a rate of 550 g/ha, carried on clay; and four levels of chemical N fertilizer from urea (N0=0, N1=60, N2=120, N3=180) were applied. The results indicated significant differences between the nitrogen fertilizer levels in all characteristics measured in this experiment. The admixed Azospirillum sp. showed significant differences between its levels in characteristics such as the number of fertile ears, number of grains per ear, grain yield, grain protein percentage, leaf area index and agronomic fertilizer use efficiency. In the interaction of Streptomyces with Azospirillum sp., the actinomycete did not show any statistically significant differences between its levels.
Abstract: This paper proposes a new technique to detect code
clones from the lexical and syntactic point of view, which is based
on PALEX source code representation. The PALEX code contains
the recorded parsing actions and also lexical formatting information
including white spaces and comments. We can record a list of parsing
actions (shift, reduce, and reading a token) during a compiling process
after a compiler finishes analyzing the source code. The proposed
technique has the advantages of being syntax-sensitive and
language-independent.
Abstract: Vinegar or sour wine is a product of alcoholic and
subsequent acetous fermentation of sugary precursors derived from
several fruits or starchy substrates. This delicious food additive and
supplement contains not less than 4 grams of acetic acid in 100 cubic
centimeters at 20°C. Among the large number of bacteria that are
able to produce acetic acid, only a few genera are used in the vinegar
industry, the most significant of which are Acetobacter and
Gluconobacter. In this research we isolated and identified an
Acetobacter strain from Iranian apricot, a very delicious summer fruit
sensitive to decay, gathered from fruit stores in Isfahan,
Iran. The main culture media we used were Carr, GYC, Frateur and
an industrial medium for vinegar production. We isolated this strain
using a novel miniature fermentor we made at Pars Yeema
Biotechnologists Co., Isfahan Science and Technology Town (ISTT),
Isfahan, Iran. Microscopic examination of the strain isolated from
Iranian apricot showed Gram-negative rods to coccobacilli. The
catalase reaction was positive, the oxidase reaction was negative, and
the strain could ferment ethanol to acetic acid. It also showed
acceptable growth at 5%, 7% and 9% ethanol concentrations at 30°C on
modified Carr media after 24, 48 and 96 hours of incubation,
respectively. Given its tolerance of high ethanol concentrations after
four days of incubation and its high acetic acid production (8.53%
after 144 hours), this strain could be considered a suitable
industrial strain for the production of a new type of vinegar, apricot
vinegar, with a new and delicious taste. In conclusion, this is the
first report of the isolation and identification of an Acetobacter
strain from Iranian apricot with very good tolerance of high ethanol
concentrations as well as high acetic acid productivity within an
industrially acceptable incubation period. This strain could be used
in the vinegar industry to convert apricot spoilage into a beneficial
product, and the mentioned characteristics make it an amenable strain
for food and agricultural biotechnology.