Abstract: The Perturbed-Chain Statistical Associating Fluid Theory (PC-SAFT) equation of state (EOS) is a modified SAFT EOS with three pure-component-specific parameters: segment number (m), segment diameter (σ), and segment energy (ε). These PC-SAFT parameters need to be determined for each component under the conditions of interest by fitting experimental data, such as vapor pressure, density, or heat capacity. PC-SAFT parameters for propane, ethylene, and hydrogen in the supercritical region were successfully estimated by fitting experimental density data available in the literature. The regressed PC-SAFT parameters were compared with literature values by estimating pure-component density and calculating the average absolute deviation between the estimated and experimental density values. The PC-SAFT parameters available in the literature, especially for ethylene and hydrogen, estimated density in the supercritical region reasonably well. However, the regressed PC-SAFT parameters performed better in the supercritical region than those from the literature.
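The comparison metric used above, the average absolute deviation (AAD) between estimated and experimental densities, can be sketched in a few lines; the density values below are hypothetical illustration data, not the paper's:

```python
def aad_percent(rho_calc, rho_exp):
    """Average absolute deviation (%) between calculated and experimental
    densities: AAD% = (100/N) * sum(|calc - exp| / exp)."""
    n = len(rho_exp)
    return 100.0 / n * sum(abs(c - e) / e for c, e in zip(rho_calc, rho_exp))

# Hypothetical supercritical density data (kg/m^3), for illustration only.
rho_exp = [200.0, 250.0, 300.0]
rho_calc = [202.0, 248.0, 303.0]
aad = aad_percent(rho_calc, rho_exp)
```

A parameter set with a smaller AAD over the density data reproduces the experiments better, which is the comparison criterion the abstract describes.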
Abstract: As part of a national epidemiological survey on bovine
viral diarrhea virus (BVDV), a total of 274 dejecta samples were
collected from 14 cattle farms in 8 areas of Xinjiang Uygur
Autonomous Region in northwestern China. Total RNA was extracted from each sample, and the 5′-untranslated region (5′-UTR) of the BVDV genome was amplified by two-step reverse transcriptase-polymerase chain reaction (RT-PCR). The PCR products were subsequently sequenced to study the genetic variation of BVDV
in these areas. Among the 274 samples, 33 samples were found
virus-positive. According to sequence analysis of the PCR products,
the 33 positive samples could be arranged into 16 groups. All of the sequences were highly homologous to the BVDV Osloss strain, and phylogenetic analysis indicated that the viruses carrying these sequences belonged to the BVDV-1b subtype. Based on these data, we established a typing tree for BVDV in these areas. Our results suggest that BVDV-1b
was a predominant subgenotype in northwestern China and no
correlation between the genetic and geographical distances could be
observed above the farm level.
Abstract: To learn about China's future energy demand, this paper first proposes a GM(1,1) model group based on recursive solutions of parameter estimation, and sets up a general solving algorithm for the model group. This method avoids the problems of remodeling, information loss, and heavy computation that occurred in past research. The paper establishes all-data GM(1,1), metabolic GM(1,1), and new-information GM(1,1) models, respectively, from the historical data of energy consumption in China for the years 2005-2010 and the added data of 2011; through modeling, simulation, and comparison of accuracies, the optimal models are selected for prediction. Results show that the total energy demand of China will be 37.2221 billion tons of coal equivalent in 2012 and 39.7973 billion tons of coal equivalent in 2013, which is consistent with the overall energy demand planning in the 12th Five-Year Plan.
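The GM(1,1) grey model underlying such a model group can be sketched as follows. This is a minimal textbook implementation (accumulated generating operation, least-squares estimation of the development coefficient a and grey input b), not the paper's recursive model-group algorithm:

```python
import math

def gm11_forecast(x0, steps):
    """Fit a GM(1,1) grey model to the series x0 and forecast `steps` ahead."""
    n = len(x0)
    # 1-AGO: accumulated generating operation
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)
    # background values z1[k] = 0.5*(x1[k] + x1[k-1]), k = 1..n-1
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    m = n - 1
    # least squares for x0[k] + a*z1[k] = b via the normal equations
    szz = sum(v * v for v in z)
    sz, sy = sum(z), sum(y)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):  # response of the whitened equation dx1/dt + a*x1 = b
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    # restore x0 forecasts by first-differencing the accumulated series
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]
```

The metabolic and new-information variants mentioned in the abstract differ only in which window of historical data is passed to the same fitting routine.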
Abstract: Using the logarithmic mean Divisia index (LMDI) decomposition technique, this paper analyzes the change in industrial energy intensity of Fujian Province in China, based on data sets of added value and energy consumption for 35 selected industrial sub-sectors from 1999 to 2009. The change in industrial energy intensity is decomposed into an intensity effect and a structure effect. Results show that the industrial energy intensity of Fujian Province achieved a reduction of 51% over the past ten years. The structural change, a shift in the mix of industrial sub-sectors, made the overwhelming contribution to the reduction, while the impact of improvements in energy efficiency was relatively small. However, the aggregate industrial energy intensity was very sensitive to changes both in energy intensity and in the production share of energy-intensive sub-sectors, such as the production and supply of electric power, steam, and hot water. Pathways to reduce industrial energy intensity for energy conservation in Fujian Province are proposed in the end.
Abstract: This study investigates errors that emerged in written texts produced by 30 Turkish EFL learners, from an explanatory, and thus qualitative, perspective. Erroneous language elements were first identified by the researcher, and their grammaticality and intelligibility were then checked by five native speakers of English. The analysis of the data showed that it is difficult to claim that an error stems from one single factor, since different features of an error are triggered by different factors. Our findings revealed two different types of errors: those that stem from the interference of L1 with L2 and those that are developmental. The former type contains more global errors, whereas the errors in the latter type are more intelligible.
Abstract: Data mining uses a variety of techniques, each of which is useful for some particular task. It is important to have a deep understanding of each technique and to be able to perform sophisticated analysis. In this article we describe a tool built to simulate a variation of the Kohonen network to perform unsupervised clustering and support the entire data mining process up to results visualization. A graphical representation helps the user to find a strategy to optimize classification by adding, moving, or deleting a neuron in order to change the number of classes. The tool is also able to automatically suggest a strategy for optimizing the number of classes. The tool is used to classify macroeconomic data that report the most developed countries' imports and exports. It is possible to classify the countries based on their economic behaviour and to use an ad hoc tool to characterize the commercial behaviour of a country in a selected class from the analysis of the positive and negative features that contribute to class formation.
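As a rough illustration of the clustering idea, here is a minimal winner-take-all sketch, a simplified variation of the Kohonen network (no neighbourhood function); the described tool's actual network and optimization strategy are not reproduced:

```python
import random

def kohonen_cluster(data, n_neurons, epochs=100, lr0=0.5, seed=0):
    """Winner-take-all competitive learning: the best-matching neuron is
    pulled toward each presented sample; a sample's cluster is the index
    of its nearest neuron."""
    rnd = random.Random(seed)
    dim = len(data[0])
    lo = [min(x[d] for x in data) for d in range(dim)]
    hi = [max(x[d] for x in data) for d in range(dim)]
    # initialise neuron weights uniformly inside the data bounding box
    w = [[lo[d] + rnd.random() * (hi[d] - lo[d]) for d in range(dim)]
         for _ in range(n_neurons)]

    def bmu(x):  # best-matching unit: smallest squared Euclidean distance
        return min(range(n_neurons),
                   key=lambda j: sum((w[j][d] - x[d]) ** 2 for d in range(dim)))

    for ep in range(epochs):
        lr = lr0 * (1.0 - ep / epochs)   # decaying learning rate
        for x in data:
            j = bmu(x)                   # only the winning neuron moves
            for d in range(dim):
                w[j][d] += lr * (x[d] - w[j][d])
    return w, [bmu(x) for x in data]
```

Adding, moving, or deleting a neuron corresponds to changing `n_neurons` or editing the weight list `w`, mirroring the class-count strategies the tool suggests to the user.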
Abstract: This paper provides an in-depth study of Wireless
Sensor Network (WSN) application to monitor and control the
swiftlet habitat. A complete system is designed and developed
that includes the hardware design of the nodes, Graphical User
Interface (GUI) software, sensor network, and interconnectivity for
remote data access and management. System architecture is proposed
to address the requirements of habitat monitoring. Such application-driven design identifies important areas of further work in data sampling, communications, and networking. For this
monitoring system, an MTS400 sensor board, IRIS and MICAz radio transceivers, and a USB-interfaced gateway base station from Crossbow (Xbow) Technology are employed. The GUI of this monitoring system is written in Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW), along with Xbow Technology drivers provided by National Instruments. As a result, this monitoring system is capable of collecting data and presenting it in both tables and waveform charts for further analysis. The system is also able to send notification messages by e-mail, provided Internet connectivity is available, whenever changes occur in the habitat at remote sites (swiftlet farms). Other functions implemented in this system include a database for record keeping and management, and remote access through the Internet using LogMeIn software. Finally, this
research concludes that a WSN for monitoring swiftlet habitat can be effectively used to monitor and manage the swiftlet farming industry in Sarawak.
Abstract: In the recent past Learning Classifier Systems have
been successfully used for data mining. Learning Classifier System
(LCS) is basically a machine learning technique which combines
evolutionary computing, reinforcement learning, supervised or
unsupervised learning and heuristics to produce adaptive systems. A
LCS learns by interacting with an environment from which it
receives feedback in the form of numerical reward. Learning is
achieved by trying to maximize the amount of reward received. All LCS models, more or less, comprise four main components: a finite population of condition–action rules, called classifiers; the
performance component, which governs the interaction with the
environment; the credit assignment component, which distributes the
reward received from the environment to the classifiers accountable
for the rewards obtained; the discovery component, which is
responsible for discovering better rules and improving existing ones
through a genetic algorithm. The concatenation of the production rules in the LCS forms the genotype, and therefore the GA operates on a population of classifier systems. This approach is known as the 'Pittsburgh' classifier system. Other LCSs that perform their GA at the rule level within a population are known as 'Michigan' classifier systems. The most predominant representation of the discovered
knowledge is the standard production rules (PRs) in the form of IF P
THEN D. The PRs, however, are unable to handle exceptions and do
not exhibit variable precision. Censored Production Rules (CPRs), an extension of PRs proposed by Michalski and Winston, exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented
production rule of the form: IF P THEN D UNLESS C, where
Censor C is an exception to the rule. Such rules are employed in
situations in which the conditional statement IF P THEN D holds
frequently and the assertion C holds rarely. By using a rule of this
type we are free to ignore the exception conditions, when the
resources needed to establish its presence are tight or there is simply
no information available as to whether it holds or not. Thus, the IF P
THEN D part of CPR expresses important information, while the
UNLESS C part acts only as a switch and changes the polarity of D
to ~D. In this paper, a Pittsburgh-style LCS approach is used for the automated discovery of CPRs. An appropriate encoding scheme is suggested to represent a chromosome consisting of a fixed-size set of CPRs. Suitable genetic operators are designed for the set of CPRs and for individual CPRs, and an appropriate fitness function is proposed that incorporates basic constraints on CPRs. Experimental results are
presented to demonstrate the performance of the proposed learning
classifier system.
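The semantics of a censored production rule IF P THEN D UNLESS C, including the case where the censor cannot be established, can be sketched directly (a toy interpreter, not the paper's LCS):

```python
def cpr_infer(p, c):
    """Evaluate a Censored Production Rule 'IF P THEN D UNLESS C'.

    p: truth of the premise P (True/False)
    c: truth of the censor C; None means resources did not allow
       establishing it, or no information is available
    Returns 'D', '~D', or None when the rule does not fire at all."""
    if not p:
        return None      # premise fails: the rule says nothing
    if c is True:
        return "~D"      # censor holds: the polarity of D is flipped
    # censor false or unknown: C holds rarely, so conclude D anyway
    return "D"
```

The `c is None` branch falling through to `"D"` captures the point made above: when checking the censor is too expensive, the IF P THEN D part alone still yields a (usually correct) conclusion.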
Abstract: In this paper, a new time-delay estimation technique based on the cross ΨB-energy operator [5] is introduced. This quadratic energy detector measures how much of a signal is present in another one. The location of the peak of the energy operator, corresponding to the maximum interaction between the two signals, is the estimate of the delay. The method is a fully data-driven approach. A discrete version of the continuous-time form of the cross ΨB-energy operator is presented for implementation. The effectiveness of the proposed method is demonstrated on real underwater acoustic signals arriving from targets, and the results are compared to the cross-correlation method.
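The paper's exact operator is not reproduced here; as an illustration under that caveat, the sketch below uses a discrete symmetric cross Teager-Kaiser form of a cross-energy operator and picks the lag where the total cross energy peaks:

```python
def cross_energy(x, y):
    """Discrete symmetric cross Teager-Kaiser energy:
    Psi[n] = x[n]*y[n] - 0.5*(x[n-1]*y[n+1] + x[n+1]*y[n-1])."""
    return [x[n] * y[n] - 0.5 * (x[n - 1] * y[n + 1] + x[n + 1] * y[n - 1])
            for n in range(1, len(x) - 1)]

def estimate_delay(x, y, max_lag):
    """Slide y over x and return the lag maximizing the total cross
    energy between x[n] and y[n+lag]; the peak marks the maximum
    interaction between the two signals."""
    best_lag, best_e = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        xs, ys = [], []
        for n in range(len(x)):
            if 0 <= n + lag < len(y):   # keep only overlapping samples
                xs.append(x[n])
                ys.append(y[n + lag])
        if len(xs) < 3:
            continue
        e = sum(abs(v) for v in cross_energy(xs, ys))
        if e > best_e:
            best_e, best_lag = e, lag
    return best_lag
```

For a delayed replica `y[n] = x[n-d]`, the cross energy is maximal at `lag = d`, exactly the peak-location principle the abstract describes.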
Abstract: Human identification at a distance has recently gained
growing interest from computer vision researchers. Gait recognition
aims essentially to address this problem by identifying people based
on the way they walk [1]. Gait recognition involves three steps: preprocessing, feature extraction, and classification. This paper focuses on the classification step, which is essential to increasing the CCR (Correct Classification Rate). A Multilayer Perceptron (MLP) is used in this work. Neural networks imitate the human brain to perform intelligent tasks [3]. They can represent complicated relationships between input and output and acquire knowledge about these relationships directly from the data [2]. In this paper we apply an MLP neural network to 11 views in our database and compare the CCR values across these views. Experiments are performed on the NLPR database, and the effectiveness of the proposed method for gait recognition is demonstrated.
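A minimal one-hidden-layer MLP trained with plain backpropagation, plus the CCR computation, can be sketched as below; the training data in the test are hypothetical two-dimensional "gait feature" toys, not NLPR features:

```python
import math
import random

def train_mlp(samples, labels, hidden=4, epochs=2000, lr=0.5, seed=1):
    """Tiny one-hidden-layer sigmoid MLP trained with backpropagation;
    a sketch of the classification step, not the authors' exact network.
    Returns a predict(x) -> 0/1 function."""
    rnd = random.Random(seed)
    dim = len(samples[0])
    # last entry of each weight row is the bias term
    w1 = [[rnd.uniform(-1, 1) for _ in range(dim + 1)] for _ in range(hidden)]
    w2 = [rnd.uniform(-1, 1) for _ in range(hidden + 1)]

    def sig(a):
        return 1.0 / (1.0 + math.exp(-a))

    def forward(x):
        h = [sig(w[-1] + sum(wi * xi for wi, xi in zip(w, x))) for w in w1]
        o = sig(w2[-1] + sum(wi * hi for wi, hi in zip(w2, h)))
        return h, o

    for _ in range(epochs):
        for x, t in zip(samples, labels):
            h, o = forward(x)
            do = (o - t) * o * (1 - o)                       # output delta
            dh = [do * w2[j] * h[j] * (1 - h[j]) for j in range(hidden)]
            for j in range(hidden):                          # output layer
                w2[j] -= lr * do * h[j]
            w2[-1] -= lr * do
            for j in range(hidden):                          # hidden layer
                for d in range(dim):
                    w1[j][d] -= lr * dh[j] * x[d]
                w1[j][-1] -= lr * dh[j]
    return lambda x: 1 if forward(x)[1] >= 0.5 else 0

def ccr(predict, samples, labels):
    """Correct Classification Rate = correct predictions / total."""
    return sum(predict(x) == y for x, y in zip(samples, labels)) / len(labels)
```

In the paper's setting one such network would be trained per view (or with view-labelled features), and the CCR values compared across the 11 views.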
Abstract: Data objects are usually organized hierarchically, and
the relations between them are analyzed based on a corresponding
concept hierarchy. The relation between data objects, for example how similar they are, is usually analyzed based on the conceptual distance in the hierarchy. If a node is an ancestor of another node, it is enough to analyze how close they are by calculating the distance vertically. However, if there is no such relation between two nodes, the vertical distance cannot express their relation explicitly. This paper tries to fill
this gap by improving the analysis method for data objects based on
hierarchy. The contributions of this paper include: (1) proposing an
improved method to evaluate the vertical distance between concepts;
(2) defining the concept of horizontal distance and a method to calculate
the horizontal distance; and (3) discussing the methods to confine a
range by the horizontal distance and the vertical distance, and
evaluating the relation between concepts.
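The paper's precise definitions are not given in the abstract; one plausible decomposition, splitting the path distance through the lowest common ancestor into a vertical and a horizontal part, can be sketched as:

```python
def ancestors(parent, node):
    """Chain from a node up to the root in a tree given as a
    child -> parent mapping (the root maps to None)."""
    chain = [node]
    while parent[node] is not None:
        node = parent[node]
        chain.append(node)
    return chain

def split_distance(parent, a, b):
    """Decompose the path distance between two concepts through their
    lowest common ancestor (LCA) into a vertical part (difference in
    depth) and a horizontal part (the remaining sideways steps)."""
    ca, cb = ancestors(parent, a), ancestors(parent, b)
    cbset = set(cb)
    lca = next(n for n in ca if n in cbset)   # first shared ancestor
    da, db = len(ca) - 1, len(cb) - 1
    dl = len(ancestors(parent, lca)) - 1
    total = (da - dl) + (db - dl)      # edges on the path through the LCA
    vertical = abs(da - db)
    return vertical, total - vertical  # (vertical, horizontal)
```

For an ancestor-descendant pair the horizontal component is zero, matching the observation above that vertical distance alone suffices in that case.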
Abstract: The temporal nature of negative selection is an underexploited area. In a negative selection system, newly generated antibodies go through a maturing phase, and the survivors of that phase then wait to be activated by incoming antigens after a certain number of matches. Those without enough matches will age and die, while those with enough matches (i.e., being activated) will become active detectors. A currently active detector may also age and die if it cannot find any match within a pre-defined (lengthy) period of time. Therefore, what matters in a negative selection system is the dynamics of the involved parties in the current time window, not the whole time duration, which may extend to eternity. This property has the potential to define the uniqueness of negative selection in comparison with other approaches. On the other hand, a negative selection system is only trained with "normal" data samples. It has to learn and discover unknown "abnormal" data patterns on the fly by itself. Consequently, it is more appropriate to utilize negative selection as a system for pattern discovery and recognition rather than pattern recognition alone. In this paper, we study the potential of using negative selection in discovering unknown temporal patterns.
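The censoring (maturing) phase of negative selection can be sketched with binary detectors and the r-contiguous-bits matching rule, a common choice in the negative-selection literature; the activation-counting and aging dynamics discussed above are omitted:

```python
import random

def matches(a, b, r):
    """r-contiguous-bits rule: two equal-length bit strings match if
    they agree on at least r contiguous positions."""
    run = best = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        best = max(best, run)
    return best >= r

def censor_detectors(normal, n_detectors, length, r, seed=0, max_tries=100000):
    """Maturing phase: random candidate detectors that match any
    'normal' sample die; the survivors become mature detectors."""
    rnd = random.Random(seed)
    detectors = []
    for _ in range(max_tries):
        if len(detectors) == n_detectors:
            break
        cand = "".join(rnd.choice("01") for _ in range(length))
        if not any(matches(cand, s, r) for s in normal):
            detectors.append(cand)
    return detectors

def is_anomalous(sample, detectors, r):
    """A sample is flagged as abnormal when any detector matches it."""
    return any(matches(sample, d, r) for d in detectors)
```

By construction the mature detectors are self-tolerant: no normal training sample can ever be flagged, while unseen patterns that fall into detector coverage are discovered on the fly.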
Abstract: Mining sequential patterns from large customer transaction databases has been recognized as a key research topic in database systems. However, previous work focused mainly on mining sequential patterns at a single concept level. In this study, we introduce concept hierarchies into this problem and present several algorithms for discovering multiple-level sequential patterns based on the hierarchies. An experiment was conducted to assess the performance of the proposed algorithms. The performance of the algorithms was measured by the relative time spent completing the mining tasks on two different datasets. The experimental results showed that the performance depends on the characteristics of the datasets and the pre-defined threshold of minimal support for each level of the concept hierarchy. Based on the experimental results, some suggestions are also given on how to select an appropriate algorithm for a given dataset.
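One basic ingredient of multiple-level sequential-pattern mining, generalizing items up a concept hierarchy before counting sequence support, can be sketched as follows (the paper's actual algorithms are not reproduced; the taxonomy and item names are invented):

```python
def generalize(sequence, taxonomy, level):
    """Replace each item by its ancestor `level` steps up the concept
    hierarchy, given as a child -> parent mapping; items with no parent
    stay unchanged."""
    out = []
    for item in sequence:
        for _ in range(level):
            if item in taxonomy:
                item = taxonomy[item]
        out.append(item)
    return out

def support(pattern, sequences):
    """Fraction of sequences containing `pattern` as a subsequence
    (order preserved, gaps allowed)."""
    def contains(seq):
        i = 0
        for x in seq:
            if i < len(pattern) and x == pattern[i]:
                i += 1
        return i == len(pattern)
    return sum(contains(s) for s in sequences) / len(sequences)
```

Patterns that are infrequent at the item level can exceed the minimal-support threshold once the sequences are generalized, which is why the threshold is set per hierarchy level.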
Abstract: In recent years, scanning probe microscopy/atomic force microscopy (SPM AFM) has gained acceptance over a wide spectrum of research and science applications. Most work focuses on physical, chemical, and biological aspects, while less attention is devoted to manufacturing and machining. The purpose of the current study is to assess
the possible implementation of the SPM AFM features and its
NanoScope software in general machining applications with special
attention to the tribological aspects of cutting tools. The surface
morphology of coated and uncoated as-received carbide inserts is
examined, analyzed, and characterized through the determination of
the appropriate scanning setting, the suitable data type imaging
techniques and the most representative data analysis parameters
using the MultiMode SPM AFM in contact mode. The NanoScope operating software is used to capture three data-type images in real time: "Height", "Deflection" and "Friction". Three scan sizes are independently performed: 2, 6, and 12 μm, with a 2.5 μm vertical range (Z). Offline-mode analysis includes the determination of three functional topographical parameters: surface "Roughness", power spectral density "PSD" and "Section". The 12 μm scan size, in association with "Height" imaging, is found efficient for capturing the fine features and tribological aspects of the examined surface. Also, "Friction" analysis is found to produce a comprehensive explanation of the lateral characteristics of the scanned surface. Many surface defects and flaws have been precisely detected and analyzed.
Abstract: We have previously introduced an ultrasonic imaging
approach that combines harmonic-sensitive pulse sequences with a
post-beamforming quadratic kernel derived from a second-order
Volterra filter (SOVF). This approach is designed to produce images
with high sensitivity to nonlinear oscillations from microbubble
ultrasound contrast agents (UCA) while maintaining high levels of
noise rejection. In this paper, a two-step algorithm is presented for computing the coefficients of the quadratic kernel that reduces the tissue component introduced by motion, maximizes noise rejection, and increases specificity while optimizing sensitivity to the UCA. In the first step, quadratic kernels from individual singular modes of the pulse-inversion (PI) data matrix are compared in terms of their ability to maximize the contrast-to-tissue ratio (CTR). In the second
step, quadratic kernels resulting in the highest CTR values are
convolved. The imaging results indicate that a signal processing
approach to this clinical challenge is feasible.
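The basic post-beamforming quadratic operation, the output of a second-order Volterra term y[n] = sum over i,j of h2[i][j] * x[n-i] * x[n-j], can be sketched as follows; the kernel-design algorithm itself is not reproduced:

```python
def quadratic_volterra(x, h2):
    """Output of a quadratic (second-order Volterra) kernel applied to a
    1-D signal: y[n] = sum_{i,j} h2[i][j] * x[n-i] * x[n-j], with
    out-of-range samples treated as zero."""
    m = len(h2)
    y = []
    for n in range(len(x)):
        acc = 0.0
        for i in range(m):
            for j in range(m):
                if n - i >= 0 and n - j >= 0:
                    acc += h2[i][j] * x[n - i] * x[n - j]
        y.append(acc)
    return y
```

Because the output is quadratic in the input, a pure tone is mapped to DC and second-harmonic components, which is the source of the kernel's sensitivity to nonlinear microbubble oscillations.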
Abstract: This paper discusses the design of knowledge integration for clinical information extracted from distributed medical ontologies, in order to improve a machine-learning-based multi-label coding assignment system. The proposed approach is implemented using a machine-learning decision-tree technique on university hospital data for patients with Coronary Heart Disease (CHD). The preliminary results show that the use of medical ontologies improves the overall system performance.
Abstract: Fungal infections are becoming more common and the
range of susceptible individuals has expanded. While Candida
albicans remains the most common infective species, other Candida
spp. are becoming increasingly significant. In a range of large-scale
studies of candidaemia between 1999 and 2006, about 52% of 9717
cases involved C. albicans, about 30% involved either C. glabrata or
C. parapsilosis and less than 15% involved C. tropicalis, C. krusei or
C. guilliermondii. However, the probability of mortality within 30
days of infection with a particular species was at least 40% for C.
tropicalis, C. albicans, C. glabrata and C. krusei and only 22% for
C. parapsilosis. Clinical isolates of Candida spp. grew at rates ranging from 1.65 h⁻¹ to 4.9 h⁻¹. Three species (C. krusei, C. albicans and C. glabrata) had relatively high growth rates (μm > 4 h⁻¹), C. tropicalis and C. dubliniensis grew moderately quickly (≈ 3 h⁻¹), and C. parapsilosis and C. guilliermondii grew slowly (< 2 h⁻¹). Based on these data, the log of the odds of mortality within 30 days of diagnosis was linearly related to μm. From this, the underlying probability of mortality is 0.13 (95% CI: 0.10-0.17), and it increases by about 0.09 ± 0.02 for each unit increase in μm. Given that the
overall crude mortality is about 0.36, the growth of Candida spp.
approximately doubles the rate, consistent with the results of larger
case-matched studies of candidaemia.
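The reported relation, log-odds of 30-day mortality linear in the growth rate μm with a baseline probability of 0.13, can be sketched as a logistic model. The intercept below is chosen to reproduce the reported baseline; the slope is an illustrative assumption, not the fitted value:

```python
from math import exp, log

def mortality_prob(mu, beta0, beta1):
    """Logistic model: log-odds of 30-day mortality linear in the
    maximal specific growth rate mu; returns the probability."""
    z = beta0 + beta1 * mu
    return 1.0 / (1.0 + exp(-z))

# Intercept reproduces the reported baseline p = 0.13 at mu = 0;
# the slope 0.4 is a hypothetical value for illustration only.
beta0 = log(0.13 / 0.87)
beta1 = 0.4
```

On the log-odds scale the effect of growth rate is additive, while on the probability scale it appears as the roughly-constant per-unit increase the abstract quotes.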
Abstract: The extraction of meaningful information from images could be an alternative method for time series analysis. In this paper, we propose a graphical analysis of time series grouped into a table with an adjusted colour scale for numerical values. The advantages of this method are also discussed. The proposed method is easy to understand and flexible enough to accommodate standard methods of pattern recognition and verification, especially for noisy environmental data.
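The mapping from numerical values to an adjusted colour scale can be sketched as a min-max discretization; this is a guess at one reasonable "adjusted scale", not the paper's exact scheme:

```python
def to_colour_table(series, n_levels=5):
    """Map each value of a time series onto a discrete colour level
    (0 = lowest, n_levels-1 = highest) using a min-max adjusted scale,
    so the series can be rendered as a coloured table."""
    lo, hi = min(series), max(series)
    span = hi - lo or 1.0          # guard against a constant series
    return [min(n_levels - 1, int((v - lo) / span * n_levels))
            for v in series]
```

Laying several such colour rows on top of each other turns a set of time series into an image on which visual pattern recognition can operate.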
Abstract: The proposed system identifies the species of a wood using the textural features present in its bark. Each species of wood has its own unique bark patterns, which enable the proposed system to identify it accurately. Automatic wood recognition systems have not yet been well established, mainly due to the lack of research in this area and the difficulty of obtaining a wood database. In our work, a
wood recognition system has been designed based on pre-processing
techniques, feature extraction and by correlating the features of those
wood species for their classification. Texture classification is a problem
that has been studied and tested using different methods due to its
valuable usage in various pattern recognition problems, such as wood
recognition and rock classification. The most popular technique for textural classification is the Gray-Level Co-occurrence Matrix (GLCM). The features extracted from the enhanced images using the GLCM are then correlated to determine the classification of the various wood species. The results obtained show a high recognition accuracy, proving that the techniques used are suitable for commercial implementation.
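The GLCM computation and two common Haralick-style texture features can be sketched in a few lines; this is a generic textbook GLCM, not the paper's full pipeline:

```python
def glcm(image, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for the pixel offset
    (dx, dy) over a 2-D list of integer gray levels in [0, levels)."""
    m = [[0.0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def contrast(p):
    """Haralick contrast: sum_ij (i-j)^2 * p(i,j)."""
    return sum((i - j) ** 2 * p[i][j]
               for i in range(len(p)) for j in range(len(p)))

def energy(p):
    """Angular second moment: sum_ij p(i,j)^2."""
    return sum(v * v for row in p for v in row)
```

Feature vectors such as (contrast, energy, ...) computed from bark images at several offsets are what a classifier then correlates across species.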
Abstract: This study was carried out in Ankara, the capital city of Turkey, in order to determine how people living in the slums of Ankara benefit from educational equality. Within the scope of the research, interviews were conducted with 64 families whose children have been receiving education in the primary schools of these districts, and the data of the study were collected by the researcher. The results of the research demonstrate that children educated in the slums of Ankara cannot experience educational equality and justice: the opportunities of the schools in these slums are very limited, so the individuals in these districts cannot benefit equally from education. The families are aware of the problems they face. Keywords: Discrimination, inequality, primary education, slums of Turkey.