Abstract: In the recent past, Learning Classifier Systems have
been successfully used for data mining. A Learning Classifier System
(LCS) is a machine learning technique that combines
evolutionary computing, reinforcement learning, supervised or
unsupervised learning, and heuristics to produce adaptive systems. An
LCS learns by interacting with an environment from which it
receives feedback in the form of numerical reward. Learning is
achieved by trying to maximize the amount of reward received. All
LCS models, more or less, comprise four main components: a finite
population of condition–action rules, called classifiers; the
performance component, which governs the interaction with the
environment; the credit assignment component, which distributes the
reward received from the environment to the classifiers accountable
for the rewards obtained; the discovery component, which is
responsible for discovering better rules and improving existing ones
through a genetic algorithm. The concatenation of the production rules
in the LCS forms the genotype, and therefore the GA operates
on a population of classifier systems; this approach is known as the
'Pittsburgh' Classifier System. Other LCSs, which run their GA at
the rule level within a single population, are known as 'Michigan' Classifier
Systems. The most predominant representation of the discovered
knowledge is standard production rules (PRs) of the form IF P
THEN D. PRs, however, are unable to handle exceptions and do
not exhibit variable precision. Censored Production Rules
(CPRs), an extension of PRs proposed by Michalski and
Winston, exhibit variable precision and support an efficient
mechanism for handling exceptions. A CPR is an augmented
production rule of the form IF P THEN D UNLESS C, where
the censor C is an exception to the rule. Such rules are employed in
situations in which the conditional statement IF P THEN D holds
frequently while the assertion C holds only rarely. By using a rule of this
type we are free to ignore the exception conditions when the
resources needed to establish their presence are limited or when no
information is available as to whether they hold. Thus, the IF P
THEN D part of a CPR expresses the important information, while the
UNLESS C part acts only as a switch that changes the polarity of D
to ~D. In this paper, the Pittsburgh-style LCS approach is used for
the automated discovery of CPRs. An appropriate encoding scheme is
suggested to represent a chromosome as a fixed-size set of
CPRs. Suitable genetic operators are designed for both the set of CPRs
and the individual CPRs, and an appropriate fitness function is proposed
that incorporates the basic constraints on CPRs. Experimental results are
presented to demonstrate the performance of the proposed learning
classifier system.
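To make the rule form concrete, the following is a minimal sketch of how a CPR of the form IF P THEN D UNLESS C could be evaluated; the encoding, function names, and the bird/penguin example are illustrative and not taken from the paper.

```python
# Minimal sketch of evaluating a Censored Production Rule (CPR):
# IF P THEN D UNLESS C. Names and structure are illustrative, not
# the paper's actual encoding.

def eval_cpr(premise, decision, censor, facts, check_censor=True):
    """Return the decision implied by the CPR for the given facts.

    premise, censor: callables mapping a fact dict to True/False.
    When resources are tight, the censor check may be skipped
    (check_censor=False) and the rule used at reduced precision.
    """
    if not premise(facts):
        return None                      # rule does not fire
    if check_censor and censor(facts):
        return ("not", decision)         # censor flips D to ~D
    return decision                      # IF P THEN D

# Example: IF bird(x) THEN flies(x) UNLESS penguin(x)
facts = {"bird": True, "penguin": True}
print(eval_cpr(lambda f: f["bird"], "flies",
               lambda f: f.get("penguin", False), facts))
# -> ('not', 'flies'); with check_censor=False -> 'flies'
```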
Abstract: This paper presents the results of the experimental
tests of the cooling performance of a 12,000-Btu/h modified air
conditioner (referred to as M-AC) that uses the ground as a heat sink
for its condenser. In the tests, the cooling capacity of the M-AC with an
optimal condensing-coil length, as well as the life expectancy of the
copper coil buried underground, was investigated. The lengths of
copper coil fabricated and used as the condenser coil of the M-AC were set
at 67, 50, 40, and 30 m, whereas that of a 12,000-Btu/h conventional
split-type air conditioner (referred to as C-AC) was about 22 m. The
results showed that the ground can absorb heat rejected from a
condenser of M-AC. The coefficient of performance (COP) of C-AC
was about 2.5 whereas those of M-AC were found to be higher. It
was found that the values of COP of M-AC with condensing coils of
67, 50 and 40 m long were about 6.9, 5.5 and 3.3, respectively, while
that of the 30-m-long coil was about 2.1. The electrical
consumption of the M-AC was found to be 11.5–15.5% lower than that
of the C-AC. Additionally, the life expectancy of the underground
condensing coil of the M-AC was found to be over 7 years.
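For reference, the COP reported above is the ratio of cooling capacity to electrical input power; the Btu/h-to-kW conversion is standard:

```latex
\mathrm{COP} = \frac{\dot{Q}_{\mathrm{cooling}}}{\dot{W}_{\mathrm{electrical}}},
\qquad 12{,}000~\mathrm{Btu/h} \approx 3.517~\mathrm{kW}.
```

For example, a unit delivering the rated 12,000 Btu/h at a COP of 2.5 would draw roughly 3.517/2.5 ≈ 1.4 kW of electrical power.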
Abstract: Let R be a ring and n a fixed positive integer. We
investigate the properties of n-strongly Gorenstein projective, injective,
and flat modules. Using homological theory, we prove that
the tensor product of an n-strongly Gorenstein projective (flat) right
R-module and a projective (flat) left R-module is also n-strongly
Gorenstein projective (flat). For a coherent ring R, we prove that
the character module of an n-strongly Gorenstein flat left R-module
is an n-strongly Gorenstein injective right R-module. Finally, for
a commutative ring R and a multiplicatively closed set S of R,
we establish the relation between n-strongly Gorenstein projective
(injective, flat) R-modules and n-strongly Gorenstein projective
(injective, flat) S^{-1}R-modules. The conclusions in this paper are
helpful for future research on Gorenstein dimensions.
Abstract: Mining sequential patterns from large customer transaction databases has been recognized as a key research topic in database systems. Previous work, however, focused mainly on mining sequential patterns at a single concept level. In this study, we introduce concept hierarchies into this problem and present several algorithms for discovering multiple-level sequential patterns based on these hierarchies. An experiment was conducted to assess the performance of the proposed algorithms, measured by the relative time spent on completing the mining tasks on two different datasets. The experimental results show that performance depends on the characteristics of the datasets and on the pre-defined minimum support threshold for each level of the concept hierarchy. Based on the experimental results, suggestions are also given on how to select an appropriate algorithm for a given dataset.
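As a rough illustration of the idea (not the paper's algorithms), the sketch below counts the support of a sequential pattern at multiple concept levels by letting a pattern element match an item or any of its ancestors in the hierarchy; the hierarchy, data, and patterns are invented:

```python
# Minimal sketch of multiple-level sequential-pattern counting under a
# concept hierarchy. The hierarchy, data, and thresholds are
# illustrative; the paper's actual algorithms are not reproduced here.

# item -> parent concept (level above); roots map to None
hierarchy = {"skim milk": "milk", "2% milk": "milk",
             "white bread": "bread", "milk": "food", "bread": "food"}

def ancestors(item):
    """Yield the item and all of its higher-level concepts."""
    while item is not None:
        yield item
        item = hierarchy.get(item)

def is_subsequence(pattern, sequence):
    """Check whether `pattern` occurs in order within `sequence`,
    matching an element if it equals the item or one of its ancestors."""
    it = iter(sequence)
    return all(any(p in ancestors(x) for x in it) for p in pattern)

def support(pattern, db):
    return sum(is_subsequence(pattern, seq) for seq in db) / len(db)

db = [["skim milk", "white bread"], ["2% milk", "white bread"],
      ["white bread", "skim milk"]]
print(support(("milk", "bread"), db))       # high-level pattern: 2/3
print(support(("skim milk", "bread"), db))  # lower level: 1/3
```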
Abstract: In recent years, scanning probe atomic force
microscopy (SPM AFM) has gained acceptance across a wide spectrum
of research and science applications. Most studies focus on physical,
chemical, and biological aspects, while less attention is devoted to
manufacturing and machining. The purpose of the current study is to assess
the possible implementation of the SPM AFM features and its
NanoScope software in general machining applications with special
attention to the tribological aspects of cutting tools. The surface
morphology of coated and uncoated as-received carbide inserts is
examined, analyzed, and characterized through the determination of
the appropriate scanning settings, the suitable data-type imaging
techniques, and the most representative data analysis parameters
using the MultiMode SPM AFM in contact mode. The NanoScope
operating software is used to capture three data-type images in real
time: "Height", "Deflection", and "Friction". Three scan sizes are
independently performed: 2, 6, and 12 μm with a 2.5 μm vertical
range (Z). Offline mode analysis includes the determination of three
functional topographical parameters: surface “Roughness", power
spectral density ("PSD"), and "Section". The 12 μm scan size in
association with "Height" imaging is found effective in capturing the
fine features and tribological aspects of the examined surface.
"Friction" analysis is also found to provide a comprehensive explanation
of the lateral characteristics of the scanned surface. Many
surface defects and flaws have been precisely detected
and analyzed.
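As an illustration of one offline analysis mentioned above, the following sketch computes a one-dimensional power spectral density from a scanned height profile; the synthetic profile and sampling step stand in for a real scan line:

```python
import numpy as np

# Minimal sketch of a 1-D power spectral density (PSD) of a surface
# height profile, as used in AFM offline analysis. The profile here is
# synthetic; a real one would come from a scan line of the image.

N, dx = 512, 12e-6 / 512           # samples and spacing for a 12 um scan line
x = np.arange(N) * dx
height = 5e-9 * np.sin(2 * np.pi * x / 1e-6) + 1e-9 * np.random.randn(N)

window = np.hanning(N)             # reduce spectral leakage
H = np.fft.rfft((height - height.mean()) * window)
freq = np.fft.rfftfreq(N, d=dx)    # spatial frequencies (1/m)
psd = (np.abs(H) ** 2) * dx / (window ** 2).sum()  # one-sided, up to a factor of 2

print(freq[np.argmax(psd[1:]) + 1])  # dominant spatial frequency ~ 1e6 1/m
```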
Abstract: There are several approaches to handling multiclass classification. Aside from one-against-one (OAO) and one-against-all (OAA), hierarchical classification techniques are also commonly used. A binary classification tree is a hierarchical classification structure that breaks down a k-class problem into binary sub-problems, each solved by a binary classifier. In each node, a set of classes is divided into two subsets. A good class partition should be able to group similar classes together. Many algorithms measure similarity in terms of the distance between class centroids: classes are grouped together by a clustering algorithm when the distances between their centroids are small. In this paper, we present a binary classification tree with tuned observation-based clustering (BCT-TOB) that finds a class partition by performing clustering on observations instead of class centroids. A merging step is introduced to merge any insignificant class split. The experiments show that the performance of BCT-TOB is comparable to that of other algorithms.
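A minimal sketch of the core idea, clustering observations (rather than class centroids) to split the classes at each node, is given below; the clusterer, node classifier, and fallback rule are illustrative stand-ins, and the paper's tuning and merging steps are omitted:

```python
# Minimal sketch of a binary classification tree built by clustering
# observations (not class centroids) to split classes, in the spirit of
# BCT-TOB. Details (clusterer, classifier, merging step) are illustrative.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_tree(X, y):
    classes = np.unique(y)
    if len(classes) == 1:
        return classes[0]                       # leaf: single class
    side = KMeans(n_clusters=2, n_init=10).fit_predict(X)
    # Assign each class to the cluster holding most of its observations.
    left = {c for c in classes if side[y == c].mean() < 0.5}
    if not left or left == set(classes):        # degenerate split: fall back
        left = set(classes[: len(classes) // 2])
    t = np.isin(y, list(left))
    clf = SVC().fit(X, t)                       # binary classifier at this node
    return (clf, build_tree(X[t], y[t]), build_tree(X[~t], y[~t]))

def predict_one(node, x):
    while isinstance(node, tuple):              # walk classifiers to a leaf
        clf, yes, no = node
        node = yes if clf.predict([x])[0] else no
    return node

X = np.random.randn(90, 2)
y = np.repeat([0, 1, 2], 30)
tree = build_tree(X, y)
print(predict_one(tree, X[0]))
```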
Abstract: Information sharing and exchange, rather than
information processing, is what characterizes information
technology in the 21st century. Ontologies, as shared common
understandings, are gaining increasing attention, as they appear
to be the most promising solution for enabling information sharing both at
a semantic level and in a machine-processable way. Domain
Ontology-based modeling has been exploited to provide
shareability and information exchange among diversified,
heterogeneous applications of enterprises.
Contextual ontologies are "an explicit specification of
contextual conceptualization"; that is, an ontology is
characterized by concepts that have multiple representations,
which may exist in several contexts. Hence, contextual
ontologies are sets of concepts and relationships seen
from different perspectives. Contextualization allows
ontologies to be partitioned according to their contexts.
The need for contextual ontologies in enterprise modeling
has become crucial due to the nature of today's competitive
market. Information resources in an enterprise are distributed and
diversified, and they need to be shared and communicated
both locally through the intranet and globally through the internet.
This paper discusses the roles that ontologies play in
enterprise modeling and how ontologies assist in building a
conceptual model in order to provide communicative and
interoperable information systems. The issue of enterprise
modeling based on contextual domain ontology is also
investigated, and a framework is proposed for an enterprise
model that consists of various applications.
Abstract: High quality requirements analysis is one of the most
crucial activities for ensuring the success of a software project;
requirements verification for software systems has therefore become
increasingly important in Requirements Engineering (RE), and it is one
of the most helpful strategies for improving the quality of software systems.
Related work shows that requirements elicitation and analysis can be
facilitated by ontological approaches and semantic web technologies.
In this paper, we propose a hybrid method that aims to verify
requirements with structural and formal semantics to detect
interactions. The proposed method is twofold: one part models
requirements with the semantic web language OWL to construct a
semantic context; the other is a set of interaction detection rules that
are derived from scenario-based analysis and represented in the
semantic web rule language (SWRL). SWRL-based rules work
with rule engines such as Jess to reason over the semantic context of the
requirements and thus detect interactions. The benefits of the proposed
method lie in three aspects: the method (i) provides systematic steps
for modeling requirements with an ontological approach, (ii) offers
synergy between requirements elicitation and domain engineering for
knowledge sharing, and (iii) supplies rules that can systematically assist
in requirements interaction detection.
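The paper's rules are expressed in SWRL and executed by a rule engine such as Jess; as a rough, language-agnostic analogue, the sketch below checks one scenario-style interaction rule over simple requirement facts. The facts and the rule are purely illustrative:

```python
# Rough analogue (not SWRL/Jess) of a scenario-derived interaction
# detection rule: two requirements interact if they command the same
# device into opposite states. The requirement facts and the rule are
# purely illustrative.

requirements = [
    {"id": "R1", "device": "heater", "action": "on",  "when": "temp<18"},
    {"id": "R2", "device": "heater", "action": "off", "when": "window_open"},
    {"id": "R3", "device": "lamp",   "action": "on",  "when": "dark"},
]

def conflicting(r1, r2):
    """Rule body: same device, contradictory actions."""
    return r1["device"] == r2["device"] and r1["action"] != r2["action"]

interactions = [(a["id"], b["id"])
                for i, a in enumerate(requirements)
                for b in requirements[i + 1:]
                if conflicting(a, b)]
print(interactions)   # -> [('R1', 'R2')]
```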
Abstract: This paper describes an optimal approach for feature
subset selection to classify the leaves based on Genetic Algorithm
(GA) and Kernel-based Principal Component Analysis (KPCA). Owing
to the high complexity of selecting optimal features, the
classification has become a critical task in analysing leaf image
data. Initially the shape, texture and colour features are extracted
from the leaf images. The extracted features are optimized by GA and
KPCA separately. The approach then performs
an intersection operation over the subsets obtained from the
optimization process. Finally, the most common matching subset is
forwarded to train the Support Vector Machine (SVM). Our
experimental results demonstrate that the application of GA
and KPCA for feature subset selection, with an SVM as the classifier,
is computationally efficient and improves the accuracy of the classifier.
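A minimal sketch of the intersection step is shown below; the GA and KPCA selectors themselves are not implemented, so their outputs are stand-in index sets, and the data are synthetic:

```python
# Minimal sketch of the subset-intersection step: take the feature
# subsets chosen by two selectors (standing in for GA and KPCA, which
# are not implemented here), intersect them, and train an SVM on the
# common features. Data and selector outputs are illustrative.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))              # e.g. shape/texture/colour features
y = (X[:, 2] + X[:, 7] > 0).astype(int)     # labels depend on features 2 and 7

ga_subset   = {1, 2, 7, 11, 15}             # stand-in for GA's selection
kpca_subset = {0, 2, 7, 9, 15, 18}          # stand-in for KPCA's selection
common = sorted(ga_subset & kpca_subset)    # intersection: [2, 7, 15]

score = cross_val_score(SVC(), X[:, common], y, cv=5).mean()
print(common, round(score, 3))
```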
Abstract: Creep stresses and strain rates have been obtained
for a thin rotating disc having variable density with inclusion by
using Seth's transition theory. The density of the disc is assumed to
vary radially as ρ = ρ_0(r/b)^(-m), ρ_0 and m being real positive
constants. It has been observed that a disc whose density increases
radially, rotates at higher angular speed, thus decreasing the
possibility of a fracture at the bore, whereas for a disc whose
density decreases radially, the possibility of a fracture at the bore
increases.
Abstract: The hypercube Q_n is one of the most well-known
and popular interconnection networks, and the k-ary n-cube Q^k_n is
an enlarged family from Q_n that keeps many pleasing properties
of hypercubes. In this article, we study the panpositionable
hamiltonicity of Q^k_n for k ≥ 3 and n ≥ 2. Let x, y ∈ V(Q^k_n)
be two arbitrary vertices and C be a hamiltonian cycle of Q^k_n.
We use d_C(x, y) to denote the distance between x and y on the
hamiltonian cycle C. Define l as an integer satisfying
d(x, y) ≤ l ≤ (1/2)|V(Q^k_n)|. We prove the following:
• When k = 3 and n ≥ 2, there exists a hamiltonian cycle C
of Q^k_n such that d_C(x, y) = l.
• When k ≥ 5 is odd and n ≥ 2, we require that l ∉ S,
where S is a set of specific integers. Then there exists a
hamiltonian cycle C of Q^k_n such that d_C(x, y) = l.
• When k ≥ 4 is even and n ≥ 2, we require l − d(x, y) to be
even. Then there exists a hamiltonian cycle C of Q^k_n such
that d_C(x, y) = l.
The result is optimal, since the restrictions on l are due to the
structure of Q^k_n by definition.
Abstract: The group mutual exclusion (GME) problem is a
variant of the mutual exclusion problem. In the present paper a
token-based group mutual exclusion algorithm, capable of handling
transient faults, is proposed. The algorithm uses the concept of
dynamic request sets. A timeout mechanism is used to detect
token loss, and a distributed scheme is used to regenerate the token.
The worst-case message complexity of the algorithm is n+1. The
maximum concurrency and forum-switch complexity of the
algorithm are n and min(n, m), respectively, where n is the number of
processes and m is the number of groups. The algorithm also satisfies
another desirable property called smooth admission. The scheme can
also be adapted to handle the extended group mutual exclusion
problem.
Abstract: This paper describes a low-voltage and low-power
channel selection analog front end with continuous-time low pass
filters and highly linear programmable gain amplifier (PGA). The
filters were realized as balanced Gm-C biquadratic filters to achieve a
low current consumption. High linearity and a constant wide
bandwidth are achieved by using a new transconductance (Gm) cell.
The PGA has a voltage gain varying from 0 to 65 dB, while
maintaining a constant bandwidth. A filter tuning circuit that requires
an accurate time base but no external components is presented.
With a 1-Vrms differential input and output, the filter achieves
-85 dB THD and a 78 dB signal-to-noise ratio. Both the filter and the PGA
were implemented in a 0.18 µm 1P6M n-well CMOS process. They
consume 3.2 mW from a 1.8 V power supply and occupy an area of
0.19 mm².
Abstract: Although agriculture is an important part of the world
economy, accounting in agriculture still has many shortcomings. The
adoption of IAS 41 “Agriculture” has tried to improve this situation
and increase the comparability of financial statements of entities in
the agricultural sector. Although controversial, IAS 41 is the first
step of a consistent transition to fair value assessment in the
agricultural sector. The objective of our work is the analysis of IAS
41 and of the current agricultural accounting situation in Romania.
Accounting regulations in Romania are in accordance with European
directives and, in many respects, converge with IFRS.
Provisions of IAS 41, however, are not reflected directly in
Romanian regulations. With the increase in forest land transactions,
it is expected that recognition and measurement of biological assets
under IAS 41 will become a necessity.
Abstract: This paper presents a methodology for emulating the electrical power consumption of the RF device during cellular phone/handset transmission using LTE technology. The emulation methodology takes the physical environmental variables and the logical interface between the baseband and the RF system as inputs to compute the emulated power dissipation of the RF device. The emulated power between the measured points, corresponding to discrete values of the logical interface parameters, is computed by polynomial interpolation using polynomial basis functions. The evaluation of polynomial and spline curve-fitting models showed a respective divergence (test error) of 8% and 0.02% from the physically measured power consumption. The precision of the instruments used for the physical measurements has been modeled as intervals. We were able to model the power consumption of the RF device operating at 5 MHz using a homotopy between the two continuous power-consumption models of the RF device operating at bandwidths of 3 MHz and 10 MHz.
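A minimal sketch of the curve-fitting step under stated assumptions is given below; the measured (setting, power) pairs are invented, and the homotopy is shown only as a comment:

```python
# Minimal sketch of emulating RF power draw between measured points by
# polynomial and spline interpolation, in the spirit described above.
# The measured (tx_power_dBm, consumption_mW) pairs are invented.

import numpy as np
from scipy.interpolate import CubicSpline

tx_dbm = np.array([-10., -5., 0., 5., 10., 15., 20.])          # interface setting
p_mw   = np.array([120., 135., 160., 205., 280., 420., 650.])  # measured power

poly   = np.polynomial.Polynomial.fit(tx_dbm, p_mw, deg=3)
spline = CubicSpline(tx_dbm, p_mw)

x = 12.5                                  # a setting between measured points
print(poly(x), spline(x))                 # two emulated power values (mW)

# A linear homotopy between two fitted curves f3 and f10 (e.g. the 3 MHz
# and 10 MHz models) could estimate an intermediate bandwidth such as
# 5 MHz: f5(x) = (1 - t) * f3(x) + t * f10(x), with t = (5 - 3) / (10 - 3).
```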
Abstract: The objective of this research was to investigate the efficiency of light-emitting diode (LED) tubes of various light colors used to lure the adult coconut hispine beetle. The research was conducted by setting the forward bias on the LED tubes, then testing the luminous efficacy and the quantity of electricity used to power each LED tube in the different light colors. Finally, the researcher examined the efficiency of each color of LED tube in luring the adult coconut hispine beetle.
The results showed that the ultraviolet LED tubes had the greatest capacity to lure the adult coconut hispine beetles, at 82.92%, followed by the blue LED tubes at 59.76%, whereas the yellow, pink, red, and warm-white LED tubes had no attracting effect on the adult coconut hispine beetles.
Abstract: During 1999, Serbia (ex-Yugoslavia) and its northern province, Vojvodina, were bombed. Because of this, the general public believes that the region was contaminated by depleted uranium and that agricultural products could be contaminated through soil radioactivity. This paper presents a repeated analysis of agricultural soil samples in Vojvodina. The same investigation was carried out during 2001, when it was concluded, based on the gamma-spectrometric analysis of 50 soil samples taken from the region of Vojvodina, that no increase in radioactivity that could endanger food production had been registered. We have continued monitoring this region, and the comparison between the two sets of results is presented.
Abstract: This study proposes a materials procurement contract
model to which the zero-cost collar option is applied for hedging price
fluctuation risks in construction. The material contract model is based on
a collar option that consists of the call option striking zone of the
construction company (the buyer), following a materials price
increase, and the put option striking zone of the material vendor (the
supplier), following a materials price decrease. This study first
determined the call option strike price Xc of the construction company
by a simple approach: it uses the predicted profit at the project starting
point, and then determines the strike price Xp of the put option that has an
identical option value, which completes the zero-cost material
contract. The analysis results indicate that the cost saving of the
construction company increased as Xc decreased, because the
critical level of the steel materials price increase was set at a low level.
However, as Xc decreased, the Xp of the put option with an identical
option value gradually increased. Thus, while cost saving increased as Xc
decreased, the gradually increasing Xp raised the construction
company's risk of loss as the steel materials price decreased.
Meanwhile, cost saving did not occur for the construction company
because of volatility; this result originated in the zero-cost feature of
the two-way collar option contract. In the case of the regular
one-way option, the transaction cost had to be subtracted from the cost
saving. The transaction cost originated from an option value that
fluctuated with the volatility. That is, the cost saving of the one-way
option was affected by the volatility. Meanwhile, even though the
collar option with zero transaction cost cut the connection between
volatility and cost saving, there was a risk of exercising the put option.
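A minimal sketch of the zero-cost construction is given below, assuming Black-Scholes pricing as a stand-in valuation model (the paper's valuation details are not reproduced); all numbers are illustrative:

```python
# Minimal sketch of constructing a zero-cost collar: given the buyer's
# call strike Xc, solve for the put strike Xp whose premium equals the
# call premium, so the net contract cost is zero. Black-Scholes pricing
# and all numbers (price, volatility, rate) are illustrative stand-ins.

from math import log, sqrt, exp, erf

def _N(x):                       # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_price(S, K, r, sigma, T, kind):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    if kind == "call":
        return S * _N(d1) - K * exp(-r * T) * _N(d2)
    return K * exp(-r * T) * _N(-d2) - S * _N(-d1)

def zero_cost_put_strike(S, Xc, r, sigma, T):
    """Bisect for the Xp with put(Xp) == call(Xc)."""
    target = bs_price(S, Xc, r, sigma, T, "call")
    lo, hi = 1e-6, Xc            # put premium grows with its strike
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_price(S, mid, r, sigma, T, "put") < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

S, r, sigma, T = 100.0, 0.03, 0.25, 1.0   # steel price index, rate, vol, 1 year
Xc = 110.0                                # buyer's cap on price increases
print(round(zero_cost_put_strike(S, Xc, r, sigma, T), 2))  # Xp, the floor
```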
Abstract: In this paper, a new approach for target recognition based on the Empirical Mode Decomposition (EMD) algorithm of Huang et al. [11] and the energy tracking operator of Teager [13]-[14] is introduced. The conjunction of these two methods is called Teager-Huang analysis. This approach is well suited for the analysis of nonstationary signals. The impulse response (IR) of the target is first band-pass filtered into subsignals (components) called Intrinsic Mode Functions (IMFs) with well-defined Instantaneous Frequency (IF) and Instantaneous Amplitude (IA). Each IMF is a zero-mean AM-FM component. In the second step, the energy of each IMF is tracked using the Teager Energy Operator (TEO). The IF and IA, useful for describing the time-varying characteristics of the signal, are estimated using the Energy Separation Algorithm (ESA) of Maragos et al. [16]-[17]. In the third step, a set of features such as skewness and kurtosis is extracted from the IF, IA, and IMF energy functions. The Teager-Huang analysis is tested on a set of synthetic IRs of sonar targets with different physical characteristics (density, velocity, shape, etc.). PCA is first applied to the features to discriminate between manufactured and natural targets. The manufactured patterns are then classified into spheres and cylinders. One hundred percent correct recognition is achieved with twenty-three echoes, where sixteen IRs, used for training, are noise-free and seven IRs, used for the testing phase, are corrupted with white Gaussian noise.
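As a concrete illustration of the energy tracking step, the sketch below implements the discrete Teager energy operator and a DESA-1a-style energy separation to estimate IF and IA; the test tone stands in for an IMF:

```python
# Minimal sketch of the Teager energy operator (TEO) and the discrete
# energy separation algorithm (DESA-1a) used to track the instantaneous
# amplitude/frequency of an AM-FM component. The test signal is made up;
# in the method above the inputs would be the EMD's IMFs.

import numpy as np

def teager(x):
    """Psi[x](n) = x(n)^2 - x(n-1) * x(n+1), for interior samples."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def desa1a(x, fs):
    """Estimate instantaneous frequency (Hz) and amplitude via DESA-1a."""
    psi_x = teager(x)
    psi_y = teager(np.diff(x))          # TEO of the backward difference
    n = min(len(psi_x), len(psi_y))
    psi_x, psi_y = psi_x[:n] + 1e-12, psi_y[:n]
    omega = np.arccos(np.clip(1 - psi_y / (2 * psi_x), -1, 1))
    amp = np.sqrt(psi_x) / np.maximum(np.sin(omega), 1e-12)
    return omega * fs / (2 * np.pi), amp

fs = 8000.0
t = np.arange(1024) / fs
x = np.cos(2 * np.pi * 440 * t)           # a pure 440 Hz tone
f, a = desa1a(x, fs)
print(f[100].round(1), a[100].round(2))   # ~440.0 Hz, ~1.0
```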
Abstract: In this work, we are interested in developing a speech denoising tool using the discrete wavelet packet transform (DWPT). This speech denoising tool will be employed in recognition, coding, and synthesis applications. For noise reduction, instead of applying the classical thresholding technique, some wavelet packet nodes are set to zero and the others are thresholded. To estimate the nonstationary noise level, we employ the spectral entropy. To evaluate our approach, the proposed technique is compared with classical denoising methods based on thresholding and on spectral subtraction. The experimental implementation uses speech signals corrupted by two types of noise: white noise and Volvo noise. The results obtained from listening tests show that our proposed technique is better than spectral subtraction. The results obtained from SNR computation show the superiority of our technique over the classical thresholding method that uses the modified hard-thresholding function based on the µ-law algorithm.
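A minimal sketch of the node-wise strategy described above is given below, using the PyWavelets package; the energy heuristic stands in for the paper's spectral-entropy noise estimate, and the signal is synthetic:

```python
# Minimal sketch of wavelet-packet denoising in the spirit described
# above: decompose with a DWPT, zero some nodes, and soft-threshold the
# rest. Node selection uses a simple energy heuristic as a stand-in for
# the paper's spectral-entropy noise estimate; the signal is made up.

import numpy as np
import pywt

fs = 8000
t = np.arange(2048) / fs
clean = np.sin(2 * np.pi * 300 * t)
noisy = clean + 0.3 * np.random.randn(len(t))

wp = pywt.WaveletPacket(noisy, wavelet="db8", maxlevel=4)
nodes = wp.get_level(4, order="freq")
energies = np.array([np.sum(n.data ** 2) for n in nodes])

thr = 0.3 * np.sqrt(2 * np.log(len(noisy)))      # universal-style threshold
for node, e in zip(nodes, energies):
    if e < 0.05 * energies.max():                # weak node: set to zero
        node.data = np.zeros_like(node.data)
    else:                                        # strong node: soft-threshold
        node.data = pywt.threshold(node.data, thr, mode="soft")

denoised = wp.reconstruct(update=False)[: len(noisy)]
snr = 10 * np.log10(np.sum(clean**2) / np.sum((denoised - clean) ** 2))
print(f"output SNR: {snr:.1f} dB")
```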