Abstract: Categorical data based on descriptions of the
agricultural landscape impose mathematical and analytical
limitations. These limitations, however, can be overcome by
transforming the data through a coding scheme and using a
non-parametric multivariate approach. The present study describes
the transformation of data from qualitative to numerical
descriptors. In a collection of 103 random soil samples over a
60-hectare field,
categorical data were obtained from the following variables: levels of
nitrogen, phosphorus, potassium, pH, hue, chroma, value and data on
topography, vegetation type, and the presence of rocks. Categorical
data were coded, and Spearman's rho correlation was then calculated
using PAST software ver. 1.78, on which the Principal Component
Analysis was based. Results revealed successful data transformation,
generating 1030 quantitative descriptors. Visualization based on the
new set of descriptors showed clear differences among sites, and
the amount of variation was successfully measured. Possible applications
of data transformation are discussed.
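A minimal sketch, with illustrative column names and codes, of the coding-plus-Spearman-plus-PCA pipeline described (the study used PAST; scipy and pandas stand in here):

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

soil = pd.DataFrame({
    "nitrogen":   ["low", "medium", "high", "medium", "low"],
    "topography": ["flat", "sloping", "flat", "rolling", "flat"],
    "rocks":      ["absent", "present", "present", "absent", "absent"],
})
# coding scheme: ordinal variables get rank-preserving codes,
# nominal ones get arbitrary integer codes
coded = pd.DataFrame({
    "nitrogen":   soil["nitrogen"].map({"low": 1, "medium": 2, "high": 3}),
    "topography": soil["topography"].astype("category").cat.codes,
    "rocks":      soil["rocks"].map({"absent": 0, "present": 1}),
})

rho, _ = spearmanr(coded)                    # Spearman's rho matrix
eigval, eigvec = np.linalg.eigh(rho)         # PCA on the correlation matrix
order = np.argsort(eigval)[::-1]
print("variance explained:", eigval[order] / eigval.sum())
scores = (coded - coded.mean()) @ eigvec[:, order]   # PC scores per site
```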
Abstract: In this paper, actors' perceptions of changes in crop
productivity and in the quantity and quality of water, and the
determinants of those perceptions, are analyzed using descriptive
statistics and an ordered logit model. Data collected from 297
Ethiopian farmers and 103
agricultural professionals from December 2009 to January 2010 are
employed. Results show that the majority of the farmers and
professionals recognized a decline in water resources, citing
climate change and soil erosion among the causes. Views on changes
in productivity, however, vary. Household assets, education level,
age, and geographical position are found to affect farmers'
perceptions of changes in crop productivity. Yet the study
underlines that there is no evidence that farmers' economic status,
age, or education level affects their recognition of the degradation
of water resources. Thus, more focus should be placed on providing
farmers with coping mechanisms and alternative resource-conserving
technologies than on educating them about the problems.
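A minimal sketch, on simulated data with illustrative variable names, of the ordered logit estimation (statsmodels' OrderedModel stands in for whatever software the study used):

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 297
df = pd.DataFrame({
    "assets":    rng.normal(0, 1, n),         # household asset index
    "education": rng.integers(0, 12, n),      # years of schooling
    "age":       rng.integers(18, 80, n),
})
# ordered response: 0 = declined, 1 = unchanged, 2 = improved
latent = 0.8 * df["assets"] + 0.05 * df["education"] + rng.logistic(size=n)
df["perception"] = pd.cut(latent, [-np.inf, -1, 1, np.inf], labels=False)

model = OrderedModel(df["perception"], df[["assets", "education", "age"]],
                     distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())
```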
Abstract: The pharmacology curriculum plays an integral role in
medical education. Learning pharmacology to choose and prescribe
drugs is a major challenge encountered by students. We developed
pharmacology applied learning activities for first year medical
students that included realistic clinical situations with escalating
complications which required the students to analyze the situation
and think critically to choose a safe drug. Tutor feedback was
provided at the end of each session. An evaluation was conducted to
assess the students' level of interest and the usefulness of the
sessions for the rational selection of drugs. The majority (98%) of
the students agreed that the session was an extremely useful
learning exercise and that similar sessions would help in the
rational selection of drugs. Applied learning sessions in the early
years of a medical program may promote deep learning and bridge the
gap between pharmacology theory and clinical practice. They may also
enhance safe prescribing skills.
Abstract: The massive proliferation of affordable computers, broadband Internet connectivity, and rich educational content has created a global phenomenon in which information and communication technology (ICT) is being used to transform education. There is therefore a need to redesign the educational system to better meet these needs. The advent of computers with sophisticated software has made it possible to solve many complex problems very quickly and at a lower cost. This paper introduces the characteristics of current E-Learning, then analyses the concept of cloud computing and describes the architecture of a cloud computing platform that combines the features of E-Learning. The authors introduce cloud computing to e-learning, build an e-learning cloud, and actively research and explore it in terms of architecture, construction method, and the model's external interfaces.
Abstract: Computer worm detection is commonly performed by
antivirus software tools that rely on prior explicit knowledge of the
worm's code (detection based on code signatures). We present an
approach for detection of the presence of computer worms based on
Artificial Neural Networks (ANN) using the computer's behavioral
measures. The significant features that describe the activity of a
worm within a host are commonly identified by security experts. We
suggest acquiring these features instead by applying feature
selection methods. We compare three different feature selection
techniques for dimensionality reduction and identification of the
most prominent features to efficiently capture computer behavior
in the context of worm activity. Additionally, we explore three
different temporal representation techniques for the most prominent
features. In order to evaluate the different techniques, several
computers were infected with five different worms and 323 different
features of the infected computers were measured. We evaluated
each technique by preprocessing the dataset according to each one
and training the ANN model with the preprocessed data. We then
evaluated the ability of the model to detect the presence of a new
computer worm, in particular, during heavy user activity on the
infected computers.
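A minimal sketch, on synthetic data, of one instance of the pipeline compared in the paper: univariate feature selection over the 323 behavioral measurements followed by an ANN classifier (the specific selection techniques and temporal representations are not reproduced):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 323))              # 323 behavioral measurements
y = rng.integers(0, 2, 500)                  # 1 = worm activity, 0 = clean

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=20),            # keep the 20 most prominent features
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
)
clf.fit(X[:400], y[:400])
print("held-out accuracy:", clf.score(X[400:], y[400:]))
```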
Abstract: Many organisations are nowadays interested in adopting a
lean manufacturing strategy that would enable them to compete in
today's competitive, globalised market. In this respect, it is necessary
to assess the implementation of lean manufacturing in different
organisations so that the important best practices can be identified.
This paper describes the development of key areas which will be
used to assess the adoption and implementation of lean
manufacturing practices. Several key areas are developed to evaluate
projects and select the most suitable ones, so as to enhance
production efficiency and increase the economic benefits of the
manufacturing unit.
Lean manufacturing becomes a lean enterprise by treating customers
and suppliers as partners. This gives an extra edge in today's cost-
and time-competitive markets, strengthening the organisation on all
the conventional points of competition: price, quality, and
delivery. Lean enterprises can deliver high-quality products quickly
and at low prices.
Abstract: The choice of finite element used to predict the nonlinear
static or dynamic response of complex structures is an important
factor. The main goal of this research work is therefore to study
the effect of the in-plane rotational degrees of freedom in linear
and geometrically nonlinear static and dynamic analysis of thin
shell structures using flat shell finite elements. To this end,
first, simple triangular and quadrilateral flat shell finite
elements are implemented in an incremental formulation based on the
updated Lagrangian corotational description for geometrically
nonlinear analysis. The triangular element is a combination of DKT
and CST elements, while the quadrilateral is a combination of DKQ
and the bilinear quadrilateral membrane element. In both elements,
the sixth degree of freedom is handled via introducing fictitious
stiffness. Second, in the same code, the sixth degree of freedom in
these elements is handled differently: the in-plane rotational
d.o.f. is treated as an effective d.o.f. in the interpolation of the
in-plane field. Our goal is to compare the resulting shell elements.
Third, the analysis is extended to linear dynamic analysis by direct
integration using Newmark's implicit method. Finally, the linear
dynamic analysis is extended to geometrically nonlinear dynamic
analysis, where Newmark's method is used to integrate the equations of
motion and the Newton-Raphson method is employed for iterating
within each time step increment until equilibrium is achieved. The
obtained results demonstrate the effectiveness and robustness of the
interpolation of the in-plane rotational d.o.f. and reveal the
deficiencies of using fictitious stiffness in linear and nonlinear
dynamic analysis.
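For the linear dynamic step, a minimal sketch of implicit Newmark integration (average-acceleration parameters beta = 1/4, gamma = 1/2) is given below; the matrices M, C, K and the dense solver are illustrative placeholders, not the authors' shell-element code:

```python
import numpy as np

def newmark_linear(M, C, K, f, u0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    """Implicit Newmark integration of M*a + C*v + K*u = f(t)."""
    u, v = u0.copy(), v0.copy()
    a = np.linalg.solve(M, f(0.0) - C @ v - K @ u)   # consistent initial acceleration
    Keff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    history = [u.copy()]
    for k in range(1, nsteps + 1):
        # effective load from the Newmark displacement/velocity updates
        rhs = (f(k * dt)
               + M @ (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1) * a)
               + C @ (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                      + dt * (0.5 * gamma / beta - 1) * a))
        u_new = np.linalg.solve(Keff, rhs)
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1) * a
        v = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
        history.append(u.copy())
    return np.array(history)

# two-d.o.f. demonstration with a sinusoidal load on the first d.o.f.
M, C = np.eye(2), 0.1 * np.eye(2)
K = np.array([[4.0, -1.0], [-1.0, 4.0]])
u = newmark_linear(M, C, K, lambda t: np.array([np.sin(t), 0.0]),
                   np.zeros(2), np.zeros(2), dt=0.01, nsteps=1000)
```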
Abstract: In this paper, the typical exponential method and the diamond-difference and modified time-discrete scheme are investigated with a self-adaptive time step. A second-order time evolution scheme is applied to the time-dependent spherical neutron transport equation via the discrete ordinates method. The numerical results show that the second-order time evolution scheme combined with the exponential method has good properties: the time-differential curve of the neutron current is smoother than that of the exponential method or of the diamond-difference and modified time-discrete scheme.
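For reference, the slab-geometry discrete-ordinates balance equation and the diamond-difference closure relating cell-average and cell-edge angular fluxes are the standard relations below (the paper's spherical-geometry and second-order time terms are not reproduced):

\[
\mu_m \frac{\psi_{m,i+1/2} - \psi_{m,i-1/2}}{\Delta x_i} + \sigma_{t,i}\,\psi_{m,i} = q_{m,i},
\qquad
\psi_{m,i} = \frac{\psi_{m,i+1/2} + \psi_{m,i-1/2}}{2}.
\]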
Abstract: In this paper, we use data mining techniques to identify
outlier patients who use large amounts of drugs over a long period
of time. Any healthcare or health insurance system must deal with
the quantities of drugs utilized by patients with chronic diseases.
In the Kingdom of Bahrain, about 20% of the health budget is spent
on medications. Managers of healthcare systems lack information
about how drugs are utilized by chronic disease patients, whether
there is any misuse, and whether there are outlier patients. In this
work, carried out in cooperation with the information department of
the Bahrain Defence Force hospital, we selected the data for cardiac
patients in the period from 1/1/2008 to 31/12/2008 as the data for
the model in this paper. We
used three techniques to analyse drug utilization by cardiac
patients: first a clustering technique, then a measure of clustering
validity, and finally a decision tree as the classification
algorithm. The clustering divided the 1603 patients, who received
15,806 prescriptions during this period, into three groups according
to drug utilization, with 23 patients (2.59%) who received 1316
prescriptions (8.32%) classified as outliers. The classification
algorithm shows that average drug utilization, together with the age
and gender of the patient, can be considered the main predictive
factors in the induced model.
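A minimal sketch of the three-step pipeline on synthetic records (illustrative feature columns; scikit-learn's k-means, silhouette score, and decision tree stand in for the specific algorithms used):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1603
X = np.column_stack([
    rng.poisson(10, n).astype(float),        # prescriptions per patient
    rng.normal(60, 12, n),                   # age
    rng.integers(0, 2, n),                   # gender (0/1)
])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("silhouette:", silhouette_score(X, km.labels_))   # clustering validity

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, km.labels_)
# the tree's top splits indicate the main predictive factors
print(tree.feature_importances_)
```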
Abstract: The incidence of oral cancer in Taiwan has increased year
by year; since 1994 it has replaced nasopharyngeal cancer as the
most common head and neck cancer. Early examination and
identification leading to earlier treatment is the most effective
medical response to these cancers. Although the government fully
subsidizes the expenses and runs a tremendous promotion program for
oral cancer screening, citizens' participation has remained low. The
purpose of this study is to understand the factors affecting
citizens' behavioral intentions to undergo oral cancer screening.
Based on the Theory of Planned Behavior, this study adopted four
distinct variables to explain these behavioral intentions. 700
questionnaires were dispatched, with 500 valid responses (71.4%)
returned by citizens aged 30 or above from the eastern counties of
Taiwan. Test results showed that attitude toward, subjective norms
concerning, and perceived behavioral control over oral cancer
screening varied across demographic factors. The study proved that
attitude toward, subjective norms concerning, and perceived
behavioral control over oral cancer screening had positive impacts
on the corresponding behavioral intention. The tests concluded that
the Theory of Planned Behavior is an appropriate theoretical
framework for explaining the factors influencing intentions to
undergo oral cancer screening. This study suggests that healthcare
professionals should provide highly accessible screening services,
rather than just delivering knowledge on oral cancer, to promote
citizens' intentions to undergo screening. The research also
provides practical implications for healthcare professionals
formulating and implementing promotional instruments to lift the
oral cancer screening rate.
Abstract: A prototype model of an emulsion separator was
designed and manufactured. Generally, it is a cylinder filled with
different fractal modules. The emulsion was fed into the reactor by a
peristaltic pump through an inlet placed at the boundary between the
two phases. For the hydrodynamic design and sizing of the reactor,
the assumptions of filtration theory were used, and methods to
describe the separation process were developed. Based on this
methodology, and using numerical methods and Autodesk software, the
process was simulated in different operating modes. The basic
hydrodynamic characteristics, speed and performance, were defined
for different types of fractal systems, along with decisions to
optimize the design of the reactor.
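Filtration-theory sizing of this kind typically starts from Darcy's law, stated here for reference (the paper's specific correlations are not reproduced):

\[
\mathbf{q} = -\frac{k}{\mu}\,\nabla p,
\]

where \(\mathbf{q}\) is the superficial (filtration) velocity, \(k\) the permeability of the module, \(\mu\) the dynamic viscosity, and \(p\) the pressure.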
Abstract: Computing the facility location problem for every location
in a country simultaneously is not easy. This paper describes
solving the problem using cluster computing. The technique is to
design a parallel algorithm using local search with the single-swap
method to solve the problem on a cluster. The parallel
implementation uses a portable parallel programming standard, the
Message Passing Interface (MPI), on a Microsoft Windows Compute
Cluster. The paper presents the local search with single-swap
algorithm and an MPI implementation on the cluster that determines
which facilities to open. When large datasets are considered, the
process of calculating a reasonable cost for a facility becomes
time-consuming. The results show that the parallel computation of
the facility location problem on the cluster achieves good speedups
and scales well as the problem size increases.
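A minimal sketch, under illustrative assumptions (random distances, a synthetic cost function; not the authors' code), of how the single-swap evaluations can be divided among MPI ranks with mpi4py:

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(42)              # same seed -> same data on every rank
n_clients, n_fac, k = 200, 40, 5
dist = rng.random((n_clients, n_fac))        # client-to-site distances
fcost = rng.random(n_fac)                    # facility opening costs

def total_cost(sites):
    s = np.fromiter(sites, dtype=int)
    # opening costs plus each client's distance to its nearest open facility
    return fcost[s].sum() + dist[:, s].min(axis=1).sum()

sol = set(range(k))                          # initial open facilities
while True:
    swaps = [(i, j) for i in sol for j in range(n_fac) if j not in sol]
    local_best = (total_cost(sol), -1, -1)
    for idx in range(rank, len(swaps), size):  # round-robin split of swap evaluations
        i, j = swaps[idx]
        c = total_cost(sol - {i} | {j})
        if c < local_best[0]:
            local_best = (c, i, j)
    best = comm.allreduce(local_best, op=MPI.MIN)  # globally best swap
    if best[1] < 0:                          # no improving swap -> local optimum
        break
    sol = sol - {best[1]} | {best[2]}
if rank == 0:
    print(sorted(sol), total_cost(sol))
```

Run with, e.g., `mpiexec -n 4 python facility.py`; every rank keeps the same solution because the data are seeded identically and the best swap is agreed via allreduce.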
Abstract: Designing, implementing, and debugging concurrency
control algorithms in a real system is a complex, tedious, and error-prone
process. Further, understanding concurrency control
algorithms and distributed computations is itself a difficult task.
Visualization can help with both of these problems. Thus, we have
developed an exploratory environment in which people can prototype
and test various versions of concurrency control algorithms, study
and debug distributed computations, and view performance statistics
of distributed systems. In this paper, we describe the exploratory
environment and show how it can be used to explore concurrency
control algorithms for the interactive steering of distributed
computations.
Abstract: The aim of this article is to assess the existing
business models used by the banks operating in the CEE countries in
the period from 2006 to 2011.
In order to obtain research results, the authors performed a
qualitative analysis of the scientific literature on bank business
models, which were grouped into clusters consisting of the following
components: 1) capital and reserves; 2) assets; 3) deposits; and 4)
loans.
In turn, bank business models were developed based on the banks'
types of core activities and divided into four groups: wholesale,
investment, retail, and universal banks.
Descriptive statistics were used to analyse the models, determining
the mean, minimum, and maximum values of the constituent cluster
components, as well as the standard deviation. The analysis of
the data is based on such bank variable indices as Return on Assets
(ROA) and Return on Equity (ROE).
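A minimal sketch, with invented figures and illustrative column names, of the descriptive statistics applied to the cluster components per business model group:

```python
import pandas as pd

banks = pd.DataFrame({
    "model":    ["Retail", "Retail", "Wholesale", "Universal", "Investment"],
    "capital":  [120.0, 95.0, 310.0, 560.0, 240.0],   # capital and reserves
    "assets":   [900.0, 720.0, 2100.0, 5400.0, 1800.0],
    "deposits": [700.0, 610.0, 900.0, 3900.0, 400.0],
    "loans":    [650.0, 540.0, 1500.0, 3600.0, 700.0],
})
# mean, min, max and standard deviation of each component per model group
stats = (banks.groupby("model")[["capital", "assets", "deposits", "loans"]]
              .agg(["mean", "min", "max", "std"]))
print(stats)
```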
Abstract: 2007 is a jubilee year: in 1967, the programming language SIMULA 67 was presented, containing all the aspects of what was later called object-oriented programming. The present paper describes the development leading up to object-oriented programming, the role of simulation in this development, and other tools that appeared in SIMULA 67 and that are nowadays called super-object-oriented programming.
Abstract: Evolutionary Programming (EP) represents a
methodology of Evolutionary Algorithms (EA) in which mutation is
considered the main reproduction operator. This paper presents a
novel EP approach for Artificial Neural Networks (ANN) learning.
The proposed strategy consists of two components: the self-adaptive
component, which contains phenotype information, and the dynamic
component, which is described by the genotype. Self-adaptation is
achieved by the addition of
a value, called the network weight, which depends on a total number
of hidden layers and an average number of neurons in hidden layers.
The dynamic component changes its value depending on the fitness
of a chromosome, exposed to mutation. Thus, the mutation step size
is controlled by two components, encapsulated in the algorithm,
which adjust it according to the characteristics of a predefined ANN
architecture and the fitness of a particular chromosome. The
comparative analysis of the proposed approach and classical EP
(Gaussian mutation) showed that a significant acceleration of the
evolution process is achieved by using both phenotype and genotype
information in the mutation strategy.
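A minimal sketch of EP-style Gaussian mutation with a fitness-dependent step size; the paper's phenotype "network weight" term is specific to its ANN setup, so a generic scale stands in for it here:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(w):
    # illustrative objective: squared error of a tiny linear "network"
    x = np.linspace(-1, 1, 20)
    return np.mean((w[0] * x + w[1] - np.sin(x)) ** 2)

def mutate(w, f, scale=1.0):
    # fitness-dependent step: worse individuals mutate more strongly
    step = scale * f / (1.0 + f)
    return w + step * rng.normal(size=w.shape)

pop = rng.normal(size=(20, 2))                 # population of weight vectors
for gen in range(200):
    fits = np.array([fitness(w) for w in pop])
    children = np.array([mutate(w, f) for w, f in zip(pop, fits)])
    both = np.vstack([pop, children])
    ranks = np.argsort([fitness(w) for w in both])
    pop = both[ranks[: len(pop)]]              # (mu + mu) survivor selection
print(pop[0], fitness(pop[0]))
```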
Abstract: A novel approach to speech coding using a hybrid architecture is presented. The advantages of parametric and perceptual coding methods are combined to create a speech coding algorithm that assures better signal quality than a traditional parametric CELP codec. Two approaches are discussed. The first is based on separating the signal into voiced components, which are encoded with the parametric algorithm, unvoiced components, which are encoded perceptually, and transients, which remain unencoded. The second approach applies perceptual encoding to the residual signal of the CELP codec. The algorithm applied for precise transient selection is described. The signal quality achieved with the proposed hybrid codec is compared to that of several standard speech codecs.
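A minimal sketch, with illustrative thresholds and not the paper's transient-selection algorithm, of classifying frames as voiced or unvoiced from short-time energy and zero-crossing rate, the kind of component split the first approach relies on:

```python
import numpy as np

def classify_frames(signal, sr, frame_ms=20.0):
    n = int(sr * frame_ms / 1000)
    labels = []
    for start in range(0, len(signal) - n, n):
        frame = signal[start:start + n]
        energy = np.mean(frame ** 2)
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
        # voiced speech: high energy, low zero-crossing rate
        labels.append("voiced" if energy > 1e-4 and zcr < 0.15 else "unvoiced")
    return labels

sr = 16000
t = np.arange(sr) / sr
demo = np.sin(2 * np.pi * 150 * t)     # a 150 Hz tone stands in for voiced speech
print(classify_frames(demo, sr)[:5])
```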
Abstract: This paper describes the optimization of a complex
dairy farm simulation model using two quite different methods of
optimization, the Genetic algorithm (GA) and the Lipschitz
Branch-and-Bound (LBB) algorithm. These techniques have been
used to improve an agricultural system model developed by Dexcel
Limited, New Zealand, which describes a detailed representation of
pastoral dairying scenarios and contains an 8-dimensional parameter
space. The model incorporates the sub-models of pasture growth and
animal metabolism, which are themselves complex in many cases.
Each evaluation of the objective function, a composite 'Farm
Performance Index (FPI)', requires simulation of at least a one-year
period of farm operation with a daily time-step, and is therefore
computationally expensive. The problem of visualization of the
objective function (response surface) in high-dimensional spaces is
also considered in the context of the farm optimization problem.
Adaptations of the Sammon mapping and parallel coordinates
visualizations are described, which help visualize some important
properties of the model's output topography. The study finds that
the GA requires fewer function evaluations in optimization than the
LBB algorithm.
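A minimal sketch of a real-coded GA over an 8-dimensional parameter space; a cheap stand-in objective replaces the FPI, which in the study requires a full one-year farm simulation per evaluation:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, POP, GENS = 8, 30, 50

def fpi_standin(x):                      # maximise: peak at x = 0.5 in each dim
    return -np.sum((x - 0.5) ** 2)

pop = rng.random((POP, DIM))
for _ in range(GENS):
    fit = np.array([fpi_standin(x) for x in pop])
    # binary tournament selection
    parents = pop[[max(rng.integers(POP, size=2), key=lambda i: fit[i])
                   for _ in range(POP)]]
    # uniform crossover followed by Gaussian mutation
    mates = parents[rng.permutation(POP)]
    mask = rng.random((POP, DIM)) < 0.5
    children = np.where(mask, parents, mates)
    children += 0.05 * rng.normal(size=children.shape)
    pop = np.clip(children, 0.0, 1.0)
best = max(pop, key=fpi_standin)
print(best, fpi_standin(best))
```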
Abstract: This paper presents a positive and negative full-wave
rectifier. The proposed structure is based on OTAs using
commercially available ICs (LT1228). The features of the proposed
circuit are that it can rectify and amplify a voltage signal with an
output magnitude controllable via the input bias current, and that
the output voltage is largely free from temperature variation. The
circuit consists of just one single-ended and three fully
differential OTAs. The performance of the proposed circuit is
investigated through PSpice simulation. The results show that the
proposed circuit can function as a positive/negative full-wave
rectifier with a wide input dynamic range from -5 V to 5 V.
Furthermore, the output voltage is only slightly dependent on
temperature variations.
Abstract: Clustering large populations is an important problem
when the data contain noise and different shapes. A good clustering
algorithm or approach should be efficient enough to detect clusters
sensitively. Besides space complexity, time complexity also gains
importance as the size grows. Using hierarchies, we developed a new
algorithm that splits attributes according to their values, choosing
the splitting dimension so as to divide the database into roughly
equal parts. At each node we calculate certain descriptive
statistical features of the data residing there, and by pruning we
generate the natural clusters with a complexity of O(n).
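A minimal sketch, not the authors' algorithm, of the core idea: recursively splitting at a dimension's median so each side holds roughly half the data, while storing descriptive statistics at every node (choosing the widest-spread dimension is an illustrative assumption):

```python
import numpy as np

def build(points, min_size=10):
    node = {"n": len(points),
            "mean": points.mean(axis=0),
            "std": points.std(axis=0)}           # per-node descriptive statistics
    if len(points) <= min_size:
        return node
    # choose the dimension with the largest spread, split at its median
    dim = int(np.argmax(points.std(axis=0)))
    med = np.median(points[:, dim])
    left, right = points[points[:, dim] <= med], points[points[:, dim] > med]
    if len(left) == 0 or len(right) == 0:        # degenerate split -> stop
        return node
    node["dim"], node["median"] = dim, med
    node["left"], node["right"] = build(left, min_size), build(right, min_size)
    return node

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(8, 1, (100, 2))])
tree = build(data)
print(tree["n"], tree["dim"], tree["median"])
```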