Abstract: In this paper, we propose a new modular approach, called neuroglial, consisting of two neural networks, one slow and one fast, which emulates a recently discovered biological reality. The implementation is based on complex multi-time-scale systems; validation is performed on the model of the asynchronous machine. We applied the geometric approach based on Gerschgorin circles for the decoupling of fast and slow variables, and the method of singular perturbations for the development of reduced models.
This new architecture allows for smaller networks with less complexity and better performance, in terms of mean square error and convergence, than the single-network model.
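As a minimal illustration of the geometric step, the sketch below computes the Gerschgorin discs of a hypothetical system matrix with two well-separated time scales; the matrix values are invented for illustration and are not the asynchronous machine model.

```python
import numpy as np

def gerschgorin_discs(A):
    """Return (center, radius) of each Gerschgorin disc of a square matrix."""
    A = np.asarray(A, dtype=float)
    centers = np.diag(A)
    radii = np.abs(A).sum(axis=1) - np.abs(centers)
    return list(zip(centers, radii))

# Hypothetical linear two-time-scale system dx/dt = A x: two disjoint disc
# clusters (eigenvalues near -1 versus near -100) indicate that the slow and
# fast variables can be decoupled.
A = np.array([[-1.0,  0.1,   0.0],
              [ 0.2, -1.2,   0.1],
              [ 0.0,  0.3, -100.0]])
for c, r in gerschgorin_discs(A):
    print(f"disc: center={c:7.1f}, radius={r:4.1f}")
```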
Abstract: Background: measuring an individual's health literacy is gaining attention, yet no appropriate instrument is available in Taiwan. Measurement tools that were developed and used in Western countries may not be appropriate for use in Taiwan due to the different language system. The purpose of this research was to develop a health literacy measurement instrument specific to Taiwanese adults. Methods: a panel of experts comprising clinical physicians, healthcare administrators, and scholars identified 125 commonly used health-related Chinese phrases from major medical knowledge sources that are easily accessible to the public. A five-point Likert scale was used to measure the understanding level of the target population, and this measurement was then compared with the correctness of the respondents' answers to a health knowledge test for validation. Samples: samples were purposively taken from four groups of people in northern Pingtung: OPD patients, university students, community residents, and casual visitors to the central park. A health knowledge index of 10 questions was used to screen out false responses, leaving 686 valid cases out of 776 to construct the scale. An independent t-test was used to examine each individual phrase, and the phrases with the highest significance were identified and retained to compose the scale. Results: a Taiwan Health Literacy Scale (THLS) was finalized with 66 health-related phrases under nine divisions. Cronbach's alpha for each division is at a satisfactory level of 0.89 and above. Conclusions: in this initial application, the factors that significantly differentiate levels of health literacy are education, female gender, age, being a family member of a stroke victim, experience with patient care, and being a healthcare professional.
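For illustration, the item-selection step can be sketched as an independent t-test on hypothetical Likert ratings, comparing respondents who scored high versus low on the knowledge index; all numbers below are invented.

```python
import numpy as np
from scipy import stats

# Hypothetical data: Likert understanding ratings (1-5) for one candidate
# phrase, split by performance on the 10-item health knowledge index.
high_knowledge = np.array([5, 4, 5, 4, 5, 4, 4, 5])
low_knowledge  = np.array([3, 2, 3, 4, 2, 3, 3, 2])

# Independent two-sample t-test; phrases with the smallest p-values
# discriminate best and would be retained for the scale.
t, p = stats.ttest_ind(high_knowledge, low_knowledge)
print(f"t = {t:.2f}, p = {p:.4f}")
```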
Abstract: Principal component regression (PCR) is a combination of principal component analysis (PCA) and multiple linear regression (MLR). The objective of this paper is to revisit the use of PCR in shortwave near-infrared (SWNIR, 750-1000 nm) spectral analysis. The idea of PCR is explained mathematically and implemented in the non-destructive assessment of the soluble solids content (SSC) of pineapple based on SWNIR spectral data. PCR achieved satisfactory results in this application, with a root mean squared error of calibration (RMSEC) of 0.7611 °Brix, a coefficient of determination (R2) of 0.5865, and a root mean squared error of cross-validation (RMSECV) of 0.8323 °Brix with 14 principal components (PCs).
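A minimal PCR sketch in Python (scikit-learn), with synthetic data standing in for the SWNIR spectra and the SSC reference values; only the choice of 14 PCs is taken from the abstract.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for SWNIR spectra: X holds absorbance values per
# wavelength (750-1000 nm), y holds the measured SSC in degrees Brix.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 250))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=100)

# PCR = PCA for dimensionality reduction followed by multiple linear
# regression on the retained principal components (14 PCs, as in the paper).
pcr = make_pipeline(PCA(n_components=14), LinearRegression())
rmse = -cross_val_score(pcr, X, y, cv=5,
                        scoring="neg_root_mean_squared_error").mean()
print(f"cross-validated RMSE: {rmse:.3f}")
```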
Abstract: This paper is part of ongoing research on the development of a systemic maintenance management model for Malaysian university buildings. In order to achieve this aim, there is a need to develop a performance model against which services are measured. Measuring performance is a significant part of maintenance management service delivery. Maintenance organizations need to know where they stand in order to provide user-driven services and to enhance productivity. The aim of this paper is to formulate a template or model for university maintenance organizations in Malaysia. The model is based on a literature review and a survey questionnaire and has been validated. Through grounded theory, this paper develops an 8-point matrix for university maintenance organizations to measure and improve their service delivery. The potential of the model is to guide and assist organizations towards providing value-added service delivery by initiating maintenance according to the user value system rather than the condition of the building.
Abstract: In the world of Peer-to-Peer (P2P) networking, different protocols have been developed to make resource sharing and information retrieval more efficient. The SemPeer protocol is a new layer on Gnutella that transforms the connections of the nodes based on semantic information to make information retrieval more efficient. However, this transformation causes high clustering in the network, which decreases the number of nodes reached and therefore the probability of finding a document. In this paper we describe a mathematical model of the Gnutella and SemPeer protocols that captures these clustering-related issues, followed by a proposed modification of the SemPeer protocol that achieves moderate clustering. The modification is a form of link management for the individual nodes that allows the SemPeer protocol to be more efficient, because the probability of a successful query in the P2P network is considerably increased. To validate the models, we ran a series of simulations that support our results.
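A toy illustration of the clustering effect, assuming networkx and two synthetic overlays of equal size: a flood with a fixed TTL reaches fewer nodes in the highly clustered graph, which is the problem the proposed link management addresses.

```python
import networkx as nx

def nodes_reached(G, source, ttl):
    """Count nodes reached by a Gnutella-style flood with a given TTL."""
    return len(nx.single_source_shortest_path_length(G, source, cutoff=ttl))

# Hypothetical overlays of equal size and average degree: a random graph
# versus a highly clustered one (Watts-Strogatz with little rewiring),
# mimicking the effect of SemPeer's semantic links.
random_overlay    = nx.gnm_random_graph(1000, 4000, seed=1)
clustered_overlay = nx.watts_strogatz_graph(1000, 8, 0.05, seed=1)

for name, G in [("random", random_overlay), ("clustered", clustered_overlay)]:
    print(name, "clustering:", round(nx.average_clustering(G), 3),
          "reached (TTL=3):", nodes_reached(G, 0, 3))
```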
Abstract: Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure it, and you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. The selection of the metric set plays a vital role in software cost estimation studies, yet its importance has been ignored, especially in neural-network-based studies. In this study we explore the reasons for those disappointing results and implement different neural network models using an augmented set of new metrics. The results obtained are compared with previous studies that used traditional metrics. To enable comparison, two types of data are used. The first part of the data is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part is collected according to the new metrics in a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on the Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is the fact that data collection requires time and care. To make more thorough use of the collected samples, the k-fold cross-validation method is also implemented. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied successfully in software cost estimation studies.
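A minimal sketch of the evaluation setup, assuming scikit-learn and synthetic effort data in place of the COCOMO'81 and company datasets; it shows an MLP scored with k-fold cross-validation, not the paper's exact architecture or metric set.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import KFold, cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in for COCOMO'81-style data: each row is a project
# described by a metric set, the target is effort in person-months.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(63, 16))
y = 100 * X[:, 0] + 50 * X[:, 1] + rng.normal(scale=5, size=63)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
# k-fold cross-validation makes thorough use of a small, costly sample.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="neg_mean_squared_error")
print("mean MSE:", -scores.mean())
```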
Abstract: The construction of a civil structure inside an urban area inevitably modifies the outdoor microclimate at the building site. Wind speed, wind direction, air pollution, driving rain, radiation, and daylight are some of the main physical aspects that are subject to major changes. The extent of these modifications depends on the shape, size, and orientation of the building and on its interaction with the surrounding environment. The flow field over a flat-roof model building has been numerically investigated in order to determine two-dimensional CFD guidelines for the calculation of the turbulent flow over a structure immersed in an atmospheric boundary layer. To this purpose, a complete validation campaign has been performed through a systematic comparison of numerical simulations with wind tunnel experimental data. Several turbulence models and spatial node distributions have been tested at five different vertical positions, from the upstream leading edge to the downstream bottom edge of the analyzed model. Flow field characteristics in the neighborhood of the building model have been numerically investigated, allowing a quantification of the capabilities of the CFD code to predict the flow separation and the extent of the recirculation regions. The proposed calculations have allowed the development of a preliminary procedure to be used as guidance in selecting the appropriate grid configuration and corresponding turbulence model for the prediction of the flow field over a two-dimensional roof architecture dominated by flow separation.
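One standard ingredient of such simulations is the inlet profile of the atmospheric boundary layer; the sketch below evaluates the usual neutral log-law profile with illustrative friction velocity and roughness length, which are not the paper's wind tunnel values.

```python
import numpy as np

KAPPA = 0.41  # von Karman constant

def abl_log_law(z, u_star, z0):
    """Mean streamwise velocity of a neutral atmospheric boundary layer."""
    return (u_star / KAPPA) * np.log((z + z0) / z0)

# Hypothetical inlet conditions: friction velocity and aerodynamic roughness
# chosen for illustration only.
heights = np.linspace(0.01, 2.0, 5)   # m
for z, u in zip(heights, abl_log_law(heights, u_star=0.4, z0=0.01)):
    print(f"z = {z:5.2f} m  ->  U = {u:5.2f} m/s")
```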
Abstract: Ground-source heat pumps achieve higher efficiencies than conventional air-source heat pumps because they exchange heat with the ground, which is cooler in summer and warmer in winter than the ambient air. Earth heat exchangers are essential parts of ground-source heat pumps, and the accurate prediction of their performance is of fundamental importance. This paper presents the development and validation of a numerical model, based on incompressible fluid flow, for the simulation of energy and temperature changes in and around a U-tube borehole heat exchanger. The FlexPDE software is used to solve the resulting simultaneous equations that model the heat exchanger. The validated model (through a comparison with experimental data) is then used to draw conclusions on how various parameters, such as the U-tube diameter, the ground thermal conductivity and specific heat, and the borehole filling material, affect the temperature of the fluid.
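A greatly simplified stand-in for such a model: explicit finite differences on 1D radial heat conduction in the ground around the borehole. All material properties and boundary values are illustrative; the paper's FlexPDE model also resolves the fluid flow and the borehole filling.

```python
import numpy as np

# Minimal 1D radial conduction sketch of the ground around a borehole,
# T_t = alpha * (T_rr + T_r / r); values are illustrative, not the paper's
# FlexPDE model or its measured parameters.
alpha = 1e-6                       # ground thermal diffusivity, m^2/s
r = np.linspace(0.055, 5.0, 200)   # radii from borehole wall outward, m
dr = r[1] - r[0]
dt = 0.4 * dr**2 / alpha           # explicit stability limit
T = np.full(r.size, 15.0)          # undisturbed ground temperature, C
T[0] = 30.0                        # fixed borehole-wall temperature, C

for _ in range(20000):             # march in time (~2 months)
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + dt * alpha * (
        (T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2
        + (T[2:] - T[:-2]) / (2 * dr * r[1:-1])
    )
    T = Tn                         # both boundaries stay fixed (Dirichlet)

i = np.searchsorted(r, 0.5)
print("temperature 0.5 m from the borehole:", round(T[i], 2), "C")
```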
Abstract: Prediction of fault-prone modules provides one way to support software quality engineering. Clustering is used to determine the intrinsic grouping in a set of unlabeled data, and among the various clustering techniques available in the literature, the K-Means approach is the most widely used. This paper introduces a K-Means-based clustering approach for finding the fault proneness of object-oriented software systems. The contribution of this paper is that it uses metric values of the JEdit open-source software to generate rules for categorizing software modules as faulty or non-faulty, and then validates them empirically. The results are measured in terms of accuracy of prediction, probability of detection, and probability of false alarms.
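A minimal sketch of the approach, assuming scikit-learn and synthetic metric values in place of the JEdit data: cluster with K-Means, label the cluster with higher metric values as faulty, and score accuracy, probability of detection (PD), and probability of false alarms (PF).

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical stand-in for JEdit module metrics (e.g. one row of metric
# values per module); y_true marks known faulty modules, used only to
# evaluate the unsupervised prediction.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (80, 3)), rng.normal(3, 1, (20, 3))])
y_true = np.array([0] * 80 + [1] * 20)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# Label the cluster with the higher mean metric values as "faulty".
faulty = int(km.cluster_centers_.sum(axis=1).argmax())
y_pred = (km.labels_ == faulty).astype(int)

pd_ = ((y_pred == 1) & (y_true == 1)).sum() / (y_true == 1).sum()  # detection
pf  = ((y_pred == 1) & (y_true == 0)).sum() / (y_true == 0).sum()  # false alarm
acc = (y_pred == y_true).mean()
print(f"accuracy={acc:.2f}  PD={pd_:.2f}  PF={pf:.2f}")
```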
Abstract: Air conditioning is mainly used as a means of human comfort. It is used most often in countries where daily temperatures are high. Scientifically, air conditioning is defined as the process of controlling the moisture, cooling, heating, and cleaning of air. Without a proper estimation of the cooling load, a large amount of energy is wasted, because an unsuitable air conditioning system does not overcome the heat gains from the surroundings: either the room is too big and the air conditioner has to use more energy to cool it, or the air conditioner is too small for the room. This study develops a program to calculate the cooling load, making cooling load estimation straightforward and allowing hourly and yearly estimates to be compared. Based on previous work, the available software is not user-friendly, which is a problem for individuals without proper knowledge of cooling load calculation. Easy access and user-friendliness should be the main objectives of any design. This program allows the cooling load to be estimated by any user, rather than by rule of thumb. Several limitations of the case study were assessed to ensure that it meets Malaysian building specifications. Finally, validation is performed by comparing manual calculations with the developed program.
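A minimal sketch of a cooling load calculation of the kind such a program automates, using the elementary Q = U * A * dT envelope gains plus occupant gains; all U-values, areas, and temperature differences are illustrative, not Malaysian specification figures.

```python
# Minimal sensible cooling load sketch using Q = U * A * dT per surface plus
# occupant gains; all figures below are illustrative.
surfaces = [                      # (name, U W/m^2K, area m^2, dT K)
    ("wall",   2.0, 60.0, 8.0),
    ("roof",   1.5, 40.0, 12.0),
    ("window", 5.8, 10.0, 8.0),
]
occupants, gain_per_person = 6, 75.0   # sensible gain, W/person

q_envelope = sum(U * A * dT for _, U, A, dT in surfaces)
q_people = occupants * gain_per_person
total = q_envelope + q_people
print(f"envelope: {q_envelope:.0f} W, people: {q_people:.0f} W, "
      f"total: {total:.0f} W ({total / 3517:.2f} tons of refrigeration)")
```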
Abstract: Saturated hydraulic conductivity is one of the soil hydraulic properties widely used in environmental studies, especially of subsurface groundwater. Since its direct measurement is time-consuming and therefore costly, indirect methods such as pedotransfer functions have been developed, based on multiple linear regression equations and neural network models, to estimate saturated hydraulic conductivity from readily available soil properties, e.g. sand, silt, and clay contents, bulk density, and organic matter. The objective of this study was to develop a neural network (NN) model to estimate saturated hydraulic conductivity from available parameters such as sand and clay contents, bulk density,
van Genuchten retention model parameters (i.e. $\theta_r$, $\alpha$, and $n$), as well as effective porosity. We used two methods to calculate effective porosity: (1) $\phi_{eff} = \theta_s - \theta_{FC}$, and (2) $\phi_{eff} = \theta_s - \theta_{inf}$, in which $\theta_s$ is the saturated water content, $\theta_{FC}$ is the water content retained at $-33$ kPa matric potential, and $\theta_{inf}$ is the water content at the inflection point.
A total of 311 soil samples from the UNSODA database were divided into three groups: 187 for training, 62 for validation (to avoid overtraining), and 62 for testing the NN model. A commercial neural network toolbox of the MATLAB software, with a multi-layer perceptron model and the back-propagation algorithm, was used for the training procedure. Statistical parameters such as the coefficient of determination (R2) and the mean square error (MSE) were used to evaluate the developed NN model. The best number of neurons in the middle layer of the NN model for methods (1) and (2) was found to be 44 and 6, respectively. The R2 and MSE values of the test phase were 0.94 and 0.0016 for method (1), and 0.98 and 0.00065 for method (2), which shows that method (2) estimates saturated hydraulic conductivity better than method (1).
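A minimal sketch of the setup, assuming scikit-learn in place of the MATLAB toolbox and synthetic data in place of the UNSODA table; it reproduces the 187/62/62 split and the R2/MSE evaluation, not the paper's trained model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical stand-in for the UNSODA-derived table: columns play the role
# of sand and clay contents, bulk density, theta_r, alpha, n, and effective
# porosity; the target stands in for saturated hydraulic conductivity.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(311, 7))
y = 2 * X[:, 6] - X[:, 2] + rng.normal(scale=0.1, size=311)

# 187 / 62 / 62 split for training, validation (to avoid overtraining), test.
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, train_size=187,
                                              random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=62,
                                            random_state=0)

# 6 hidden neurons, as found best for method (2) in the abstract.
nn = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0)
nn.fit(X_tr, y_tr)   # the validation set would guide when to stop training
print("test R2:", round(r2_score(y_te, nn.predict(X_te)), 3),
      "MSE:", round(mean_squared_error(y_te, nn.predict(X_te)), 4))
```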
Abstract: Thermal conductivity is an important characteristic of a nanofluid in laminar-flow heat transfer. This paper presents an improved model, based on dimensionless groups, for the prediction of the effective thermal conductivity of nanofluids. The model expresses the thermal conductivity of a nanofluid as a function of the thermal conductivities of the solid and the liquid, their volume fractions, and the particle size. The proposed model includes a parameter which accounts for the interfacial shell, Brownian motion, and particle aggregation. The model is validated against experimental results for TiO2-water and Al2O3-water nanofluids.
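For orientation, the classical Maxwell model is the usual baseline that such improved models extend; the sketch below evaluates it for illustrative Al2O3-water values and is not the paper's proposed model.

```python
def maxwell_k_eff(k_f, k_p, phi):
    """Classical Maxwell model for the effective thermal conductivity of a
    dilute suspension; a common baseline, not the paper's improved model."""
    num = k_p + 2 * k_f + 2 * phi * (k_p - k_f)
    den = k_p + 2 * k_f - phi * (k_p - k_f)
    return k_f * num / den

# Illustrative values for Al2O3-water: k_water ~ 0.613 W/mK, k_Al2O3 ~ 40 W/mK.
for phi in (0.01, 0.03, 0.05):
    print(f"phi={phi:.2f}  k_eff={maxwell_k_eff(0.613, 40.0, phi):.3f} W/mK")
```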
Abstract: There are several approaches to solving the Quantitative Structure-Activity Relationship (QSAR) problem. These approaches are based either on statistical methods or on predictive data mining. Among the statistical methods, one should consider regression analysis, pattern recognition (such as cluster analysis, factor analysis, and principal components analysis), or partial least squares. Predictive data mining techniques use neural networks, genetic programming, or neuro-fuzzy knowledge. These approaches have a low explanatory capability, or none at all. This paper attempts to establish a new approach to solving QSAR problems using descriptive data mining. This way, the relationship between the chemical properties and the activity of a substance can be comprehensibly modeled.
Abstract: Estimation of stature is an important step in developing a biological profile for human identification, and it may provide a valuable indicator for identifying an unknown individual in a population. The aim of this study was to analyse the relationship between stature and lower limb dimensions in the Malaysian population. The sample comprised 100 corpses, 69 males and 31 females, aged between 20 and 90 years. The parameters measured were stature, thigh length, lower leg length, leg length, foot length, foot height, and foot breadth. Results showed that the mean values in males were significantly higher than those in females (P < 0.05), and there were significant correlations between the lower limb dimensions and stature. Cross-validation of the equation on the 100 individuals showed close approximation between known stature and estimated stature. It was concluded that lower limb dimensions are useful for the estimation of stature, which should be confirmed in future studies.
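A minimal sketch of the statistical procedure, with invented measurements standing in for the cadaver data: fit a linear regression of stature on one lower limb dimension and cross-validate the predictions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

# Hypothetical stand-in for the measurements: foot length (cm) vs stature
# (cm); a real study would fit one equation per dimension and sex.
rng = np.random.default_rng(0)
foot_length = rng.normal(25.0, 1.5, size=100).reshape(-1, 1)
stature = 70.0 + 3.8 * foot_length.ravel() + rng.normal(scale=3.0, size=100)

# Cross-validated predictions approximate the paper's cross-validation of
# the regression equation on the 100 individuals.
pred = cross_val_predict(LinearRegression(), foot_length, stature, cv=5)
print("mean absolute error (cm):", round(np.abs(pred - stature).mean(), 2))
```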
Abstract: Functionality and control behavior are both primary requirements in the design of a complex system. Automata theory plays an important role in modeling the behavior of a system. Z is an ideal notation for describing the state space of a system and then defining operations over it. Consequently, an integration of automata and Z is an effective tool for increasing the modeling power for a complex system. Further, nondeterministic finite automata (NFA) may have different implementations, and therefore it is necessary to verify the transformation from diagrams to code. If we describe the formal specification of an NFA before implementing it, then confidence in the transformation can be increased. In this paper, we give a procedure for integrating NFA and Z. The complement of a special type of NFA is defined, then the union of two NFAs is formalized after defining their complements, and finally the formal construction of the intersection of NFAs is described. The specification of this relationship is analyzed and validated using the Z/EVES tool.
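Alongside the Z specification, the intersection construction can be sketched operationally; the snippet below implements the standard product construction for NFAs (without epsilon moves), the usual computational counterpart of the formal definition.

```python
from itertools import product

# An NFA is (states, alphabet, delta, start_states, accept_states) with
# delta: (state, symbol) -> set of successor states.
def nfa_intersection(n1, n2):
    """Product construction: the result accepts L(n1) intersect L(n2)."""
    Q1, S, d1, I1, F1 = n1
    Q2, _, d2, I2, F2 = n2
    Q = set(product(Q1, Q2))
    d = {((q1, q2), a): {(p1, p2)
                         for p1 in d1.get((q1, a), set())
                         for p2 in d2.get((q2, a), set())}
         for (q1, q2) in Q for a in S}
    return Q, S, d, set(product(I1, I2)), set(product(F1, F2))

def accepts(nfa, word):
    Q, S, d, I, F = nfa
    current = set(I)
    for a in word:
        current = set().union(*(d.get((q, a), set()) for q in current))
    return bool(current & F)

# N1 accepts words containing 'a'; N2 accepts words of even length.
N1 = ({0, 1}, {"a", "b"},
      {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "a"): {1}, (1, "b"): {1}},
      {0}, {1})
N2 = ({0, 1}, {"a", "b"},
      {(0, "a"): {1}, (0, "b"): {1}, (1, "a"): {0}, (1, "b"): {0}},
      {0}, {0})
both = nfa_intersection(N1, N2)
print(accepts(both, "ab"), accepts(both, "abb"))  # True, False
```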
Abstract: To identify discriminative sequence features between exons and introns, a new paradigm, rescaled-range frameshift analysis (RRFA), is proposed. By RRFA, two new sequence features, the frameshift sensitivity (FS) and the accumulative penta-mer complexity (APC), were discovered and further integrated into a new larger-scale feature, the persistency in anti-mutation (PAM). Feature-validation experiments were performed on six model organisms to test the power of discrimination. All the experimental results strongly support that FS, APC, and PAM are distinguishing features between exons and introns. These newly identified sequence features provide new insights into the sequence composition of genes and have great potential to form a new basis for recognizing the exon-intron boundaries in gene sequences.
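As a rough illustration of a penta-mer complexity measure (related to, but simpler than, the paper's APC), the sketch below scores the fraction of distinct 5-mers per window on invented exon-like and repeat-rich sequences.

```python
# Illustrative penta-mer (5-mer) complexity: the fraction of distinct 5-mers
# in a sliding window; the paper's accumulative penta-mer complexity (APC)
# is a related but more elaborate statistic.
def pentamer_complexity(seq, window=100):
    scores = []
    for i in range(max(1, len(seq) - window + 1)):
        win = seq[i:i + window]
        kmers = {win[j:j + 5] for j in range(len(win) - 4)}
        scores.append(len(kmers) / (len(win) - 4))
    return scores

def mean_complexity(seq, window=100):
    scores = pentamer_complexity(seq, window)
    return sum(scores) / len(scores)

exon_like = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA" * 3
intron_like = "ATATATATATATATAT" * 7   # low-complexity repeat
print("exon-like  :", round(mean_complexity(exon_like), 3))
print("intron-like:", round(mean_complexity(intron_like), 3))
```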
Abstract: The problems associated with the wind predictions of the WAsP model in complex terrain have been the target of several studies over the last decade. In this paper, the influence of surrounding orography on the accuracy of wind data analysis for a terrain is investigated. For the case study, a site with complex surrounding orography is considered. This site is located in Manjil, one of the windiest cities of Iran. For a precise evaluation of the wind regime at the site, one year of wind data measurements from two meteorological masts are used. To validate the results obtained from WAsP, cross-prediction between the two masts is performed. The analysis reveals that the WAsP model can estimate the wind speed behavior accurately. In addition, the results show that this software can be used for predicting the wind regime in flat sites with complex surrounding orography.
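Wind-atlas analyses of mast data typically summarize the speed distribution with a Weibull fit; the sketch below does this for a synthetic one-year series, as an illustration of the kind of statistics compared between the two masts.

```python
import numpy as np
from scipy import stats

# Hypothetical one-year series of 10-minute mean wind speeds at a mast
# (52560 records); wind-atlas tools such as WAsP summarize such data with
# a Weibull distribution.
rng = np.random.default_rng(0)
speeds = stats.weibull_min.rvs(c=2.0, scale=8.5, size=52560, random_state=rng)

# Fit shape k and scale A with the location pinned at zero, as is standard
# for wind speed distributions.
k, loc, A = stats.weibull_min.fit(speeds, floc=0)
print(f"Weibull shape k = {k:.2f}, scale A = {A:.2f} m/s, "
      f"mean speed = {speeds.mean():.2f} m/s")
```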
Abstract: Microarrays have become effective, broadly used tools in biological and medical research, addressing a wide range of problems including the classification of disease subtypes and tumors. Many statistical methods are available for analyzing and systematizing these complex data into meaningful information, and one of the main goals in analyzing gene expression data is the detection of samples or genes with similar expression patterns. In this paper, we compare the performance of several clustering methods under different data preprocessing strategies, including normalization and noise removal. We also evaluate each of these clustering methods with validation measures, for both simulated data and real gene expression data. We find that the clustering methods commonly used in microarray data analysis are affected by the normalization strategy and by the degree of noise in the datasets.
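A small illustration of the normalization effect, assuming scikit-learn and simulated expression data: K-Means recovers the two sample classes only after standardization, scored here with the adjusted Rand index as the validation measure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import adjusted_rand_score

# Simulated expression matrix (samples x genes) with two sample classes;
# a few uninformative genes on a huge scale mimic unnormalized data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (25, 50)), rng.normal(1, 1, (25, 50))])
X[:, :10] = rng.normal(0, 100, size=(50, 10))   # noisy, dominant genes
labels = np.array([0] * 25 + [1] * 25)

for name, data in [("raw", X),
                   ("normalized", StandardScaler().fit_transform(X))]:
    pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
    print(name, "ARI:", round(adjusted_rand_score(labels, pred), 2))
```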
Abstract: In this paper, a two-dimensional (2D) numerical model for the simulation of tidal currents in the Persian Gulf is presented. The model is based on the depth-averaged shallow water equations, which assume a hydrostatic pressure distribution. The continuity equation and two momentum equations, including the effects of bed friction, the Coriolis force, and wind stress, are solved. To integrate the 2D equations, the Alternating Direction Implicit (ADI) technique is used, with the equations discretized by the finite volume method on a rectangular mesh. To validate the model, a dam-break case study with an analytical solution is selected and the comparison is made. The capability of the model to simulate tidal currents in a real field is then demonstrated by modeling the current behavior in the Persian Gulf, where the tidal fluctuations in the Hormuz Strait drive the tidal currents in the area of study. Therefore, the water surface oscillation data at Hengam Island in the Hormuz Strait are used as the model input, and the measured water surface elevations at Assaluye port serve as the check point. The acceptable agreement between the computed and measured results demonstrates the model's ability to simulate marine hydrodynamics.
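A drastically simplified 1D linearized shallow-water sketch (explicit scheme on a staggered grid, with invented depth and an M2-type forcing) to illustrate the depth-averaged continuity and momentum balance; the paper's model is 2D with bed friction, Coriolis, wind stress, and an ADI solver.

```python
import numpy as np

g, H = 9.81, 50.0                  # gravity, mean depth (illustrative)
nx, dx = 200, 1000.0               # grid: 200 cells of 1 km
dt = 0.5 * dx / np.sqrt(g * H)     # CFL-limited time step
eta = np.zeros(nx)                 # free-surface elevation at cell centers
u = np.zeros(nx + 1)               # velocity at cell faces (ends closed)

for step in range(5000):
    t = step * dt
    eta[0] = 0.5 * np.sin(2 * np.pi * t / 44712.0)  # M2 tide (12.42 h period)
    u[1:-1] += -g * dt / dx * (eta[1:] - eta[:-1])  # momentum equation
    eta += -H * dt / dx * (u[1:] - u[:-1])          # continuity equation

print("elevation at the mid-domain check point:", round(eta[nx // 2], 3), "m")
```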
Abstract: In this paper, a one-dimensional Self-Organizing Map (SOM) algorithm for feature selection is presented. The algorithm is based on a first classification of the input dataset in a similarity space; from this classification, a set of positive and negative features is computed for each class and selected as the result of the procedure. The procedure is evaluated on an in-house dataset from a Knowledge Discovery from Text (KDT) application and on a set of publicly available datasets used in international feature selection competitions. These datasets come from KDT applications, drug discovery, and other applications. The known correct classification of the training and validation datasets is used to optimize the parameters for positive and negative feature extraction. The process becomes feasible for large and sparse datasets, such as those obtained in KDT applications, by using compression techniques to store the similarity matrix and speed-up techniques for the Kohonen algorithm that take advantage of the sparsity of the input matrix. These improvements, together with the use of the grid, make it feasible to apply the methodology to massive datasets.
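A minimal 1D Kohonen SOM sketch on synthetic data, with a naive positive/negative feature readout; the paper's similarity-space classification, compression, and speed-up techniques are not reproduced here.

```python
import numpy as np

# Minimal 1D Kohonen SOM: map documents onto a line of units, then rank
# features by how strongly each unit's weight vector deviates from the
# global mean (a simple stand-in for positive/negative feature extraction).
rng = np.random.default_rng(0)
X = rng.random((200, 30))             # e.g. term weights per document
n_units, epochs = 8, 20
W = rng.random((n_units, X.shape[1]))

for epoch in range(epochs):
    lr = 0.5 * (1 - epoch / epochs)              # decaying learning rate
    radius = max(1.0, n_units / 2 * (1 - epoch / epochs))
    for x in rng.permutation(X):
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best matching unit
        dist = np.abs(np.arange(n_units) - bmu)
        h = np.exp(-(dist ** 2) / (2 * radius ** 2))  # 1D neighborhood
        W += lr * h[:, None] * (x - W)

# Positive features of a class: dimensions where its unit sits well above
# the global mean; negative features: well below.
delta = W - X.mean(axis=0)
print("unit 0 positive features:", np.argsort(delta[0])[-3:])
print("unit 0 negative features:", np.argsort(delta[0])[:3])
```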