Abstract: The usefulness of weaning foods to meet the nutrient
needs of children is well recognized, and most of them are precooked
roller dried mixtures of cereal and/or legume flours which possess a
high viscosity and bulk when reconstituted. The objective of this study
was to formulate composite weaning foods using cereals, malted
legumes and vegetable powders and analyze them for nutrients,
functional properties and sensory attributes. Selected legumes (green
gram and lentil) were germinated, dried and dehulled. Roasted wheat,
rice, carrot powder and skim milk powder were also used. All the
ingredients were mixed in different proportions to get four
formulations, made into 30% slurry and dried in roller drier. The
products were analyzed for proximate principles, mineral content,
functional and sensory qualities. The analysis showed the following
ranges of constituents per 100 g of formulation on a dry weight basis:
protein, 18.1-18.9 g; fat, 0.78-1.36 g; iron, 5.09-6.53 mg; calcium,
265-310 mg. The lowest water absorption capacity was found in the
wheat-green gram based sample and the highest in the rice-lentil
based sample. Overall sensory qualities of all foods were graded as
"good" and "very good" with no significant differences. The results
confirm that formulated weaning foods were nutritionally superior,
functionally appropriate and organoleptically acceptable.
Abstract: The gel-supported precipitation (GSP) process can be
used to make spherical particles (spherules) of nuclear fuel,
particularly for very high temperature reactors (VHTR) and even for
implementing the process called SPHEREPAC. In these different
cases, the main characteristics are the sphericity of the particles to be
manufactured and the control over their grain size. Nonetheless,
depending on the specifications defined for these spherical particles,
the GSP process has intrinsic limits, particularly when fabricating
very small particles. This paper describes the use of secondary
fragmentation (water, water/PVA and uranyl nitrate) on solid
surfaces under varying temperature and vibration conditions to assess
the relevance of using this new technique to manufacture very small
spherical particles by means of a modified GSP process. The
fragmentation mechanisms are monitored and analysed, before the
trends for its subsequent optimised application are described.
Abstract: Most of the biclustering/projected clustering algorithms are based either on the Euclidean distance or the correlation coefficient, which capture only linear relationships. However, in many applications, like gene expression data and word-document data, non-linear relationships may exist between the objects. Mutual information between two variables provides a more general criterion to investigate dependencies amongst variables. In this paper, we improve upon our previous mutual-information-based biclustering algorithm in terms of both computation time and the types of clusters identified. The algorithm is able to find biclusters with mixed relationships and is faster than the previous one. To the best of our knowledge, none of the other existing algorithms for biclustering has used mutual information as a similarity measure. We present experimental results on synthetic data as well as on the yeast expression data. Biclusters on the yeast data were found to be biologically and statistically significant using GO Tool Box and FuncAssociate.
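The appeal of mutual information as a similarity measure can be illustrated with a small sketch (a hypothetical histogram plug-in estimator in Python; the paper's own estimator and data are not specified in the abstract): unlike the correlation coefficient, mutual information also responds to non-linear dependence such as y = x^2.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram plug-in estimate of mutual information between two
    real-valued profiles (a hypothetical helper for illustration)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # skip empty cells (log 0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
linear = 2.0 * x                    # linear dependence
nonlinear = x**2                    # invisible to Pearson correlation
noise = rng.normal(size=1000)       # independent of x

# mutual information ranks both dependent profiles above the noise
print(round(mutual_information(x, linear), 2),
      round(mutual_information(x, nonlinear), 2),
      round(mutual_information(x, noise), 2))
```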
Abstract: The theory of Groebner Bases, which has recently been
honored with the ACM Paris Kanellakis Theory and Practice Award,
has become a crucial building block to computer algebra, and is
widely used in science, engineering, and computer science. It is
well-known that Groebner bases computation is EXPSPACE in a general
polynomial ring setting.
However, for many important applications in computer science
such as satisfiability and automated verification of hardware and
software, computations are performed in a Boolean ring. In this paper,
we give an algorithm to show that Groebner bases computation is PSPACE
in Boolean rings. We also show that with this discovery,
the Groebner bases method can theoretically be as efficient as
other methods for automated verification of hardware and software.
Additionally, Groebner bases have many useful and interesting
properties, including the ability to efficiently convert bases between
different variable orders, which makes Groebner bases a promising
method in automated verification.
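The Boolean-ring setting can be illustrated with a small sketch using SymPy's groebner over GF(2) (an illustrative encoding chosen here, not the paper's algorithm): the field polynomials v**2 + v restrict solutions to Boolean values, and a formula is unsatisfiable exactly when 1 appears in the resulting basis.

```python
from sympy import groebner, symbols

x, y = symbols('x y')

# Field polynomials v**2 + v force Boolean (0/1) values over GF(2).
field = [x**2 + x, y**2 + y]

# Illustrative encoding (not the paper's own): NOT a -> 1 + a,
# a AND b -> a*b, a OR b -> a + b + a*b; asserting a clause c = 1
# contributes the polynomial c + 1.

# Unsatisfiable: (x OR y) AND (NOT x) AND (NOT y)
unsat = [x + y + x*y + 1, x, y]
gb_unsat = groebner(unsat + field, x, y, modulus=2, order='lex')

# Satisfiable: (x OR y) alone
gb_sat = groebner([x + y + x*y + 1] + field, x, y, modulus=2, order='lex')

# a system is unsatisfiable exactly when 1 appears in the basis
print(1 in gb_unsat.exprs, 1 in gb_sat.exprs)
```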
Abstract: This paper presents a robust proportional-derivative
(PD) based cerebellar model articulation
controller (CMAC) for vertical take-off and landing flight
control systems. A successful on-line training and recall
process for the CMAC accompanying the PD controller is
developed. The main advantage of the proposed method is
the robust tracking performance against aerodynamic
parametric variation and external wind gust. The
effectiveness of the proposed algorithm is validated through
application to a vertical take-off and landing aircraft
control system.
Abstract: In this paper, the effect of transmission codes on the
performance of coherent square M-ary quadrature amplitude
modulation (CSMQAM) under hybrid selection/maximal-ratio
combining (H-S/MRC) diversity is analysed. The fading channels are
modeled as frequency non-selective, slow, independent and identically
distributed Rayleigh fading channels corrupted by additive white
Gaussian noise (AWGN). The results for coded MQAM are
computed numerically for the case of (24,12) extended Golay code
and compared with uncoded MQAM under H-S/MRC diversity by
plotting error probabilities versus average signal-to-noise ratio (SNR)
for various values of L and N in order to examine the improvement in
the performance of the digital communications system as the number
of selected diversity branches is increased. The results for no
diversity, conventional SC and Lth order MRC schemes are also
plotted for comparison. The closed-form analytical results derived in
this paper are sufficiently simple to be computed numerically without
any approximations. The analytical results
presented in this paper are expected to provide useful information
needed for design and analysis of digital communication systems
over wireless fading channels.
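The H-S/MRC combining rule itself can be sketched with a small Monte Carlo simulation (an illustrative Python sketch, not the paper's closed-form analysis; under the i.i.d. Rayleigh assumption each branch's instantaneous SNR is exponentially distributed): of N branches, the L strongest are selected and their SNRs are added.

```python
import numpy as np

rng = np.random.default_rng(1)

def hs_mrc_snr(n_branches, n_selected, avg_snr=1.0, trials=100_000):
    """Average combined SNR of H-S/MRC: MRC over the n_selected
    strongest of n_branches i.i.d. Rayleigh-faded branches."""
    # per-branch instantaneous SNR is exponential for Rayleigh fading
    snr = rng.exponential(avg_snr, size=(trials, n_branches))
    best = np.sort(snr, axis=1)[:, -n_selected:]   # select the L largest
    return best.sum(axis=1).mean()                 # MRC adds branch SNRs

N = 6
for L in (1, 2, 4, 6):   # L = 1 is pure selection, L = N is full MRC
    print(L, round(hs_mrc_snr(N, L), 3))
```

The simulated averages increase with L, illustrating the diversity gain examined in the paper.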
Abstract: The number of companies adopting RFID in Korea
has increased continuously due to the domestic development of
information technology. The adoption of RFID by companies in
Korea enabled them to do business with many global enterprises in a
much more efficient and effective way. According to a survey [33,
p. 76], many companies in Korea have used RFID for inventory or
distribution management. However, the use of RFID by companies in
Korea is in the early stages and its potential value hasn't fully been realized
yet. At this time, it would be very important to investigate the factors
that affect RFID acceptance. For this study, many previous studies
were referenced and some RFID experts were interviewed. Through
the pilot test, four factors were selected - Security Trust, Employee
Knowledge, Partner Influence, Service Provider Trust - affecting
RFID acceptance, and an extended technology acceptance
model (e-TAM) was presented with those factors. The proposed model
was empirically tested using data collected from employees in
companies or public enterprises. In order to analyze the
relationships between the exogenous variables and the four variables in TAM,
structural equation modeling (SEM) was employed, and SPSS 12.0 and
AMOS 7.0 were used for the analyses. The results are summarized as
follows: 1) security trust perceived by employees positively
influences perceived usefulness and perceived ease of use; 2)
employees' knowledge of RFID positively influences only
perceived ease of use; 3) a partner's influence for RFID acceptance
positively influences only perceived usefulness; 4) service provider
trust strongly and positively influences perceived usefulness and perceived
ease of use; 5) the relationships between the TAM variables are the same as in
previous studies.
Abstract: This article investigated the validity of the C-test and the Cloze test, which purport to measure general English proficiency. To provide empirical evidence pertaining to the validity of the interpretations based on the results of these integrative language tests, their criterion-related validity was investigated. In doing so, the Test of English as a Foreign Language (TOEFL), which is an established, standardized, and internationally administered test of general English proficiency, was used as the criterion measure. Some 90 Iranian English majors participated in this study. They were seniors studying English at a university in Tehran, Iran. The results of the analyses showed that there is a statistically significant correlation among participants' scores on the Cloze test, the C-test, and the TOEFL. Building on the findings of the study and considering criterion-related validity as the evidential basis of the validity argument, it was cautiously deduced that these tests measure the same underlying trait. However, considering the limitations of using criterion measures to validate tests, no absolute claims can be made as to the construct validity of these integrative tests.
Abstract: School experiences, family bonding and self-concept
have always been crucial factors influencing all aspects of a
student's development. The purpose of this study is to develop and
validate a priori model of self-concept among students. The study
was tested empirically using Structural Equation Modeling (SEM)
and Confirmatory Factor Analysis (CFA) to validate the structural
model. To address these concerns, 1167 students were randomly
selected, and the Cognitive Psycho-Social University of
Malaya instrument (2009) was utilized. Results demonstrated that there is an indirect
effect from family bonding to self-concept through school
experiences, which act as a mediator, among secondary school students. Besides
school experiences, there is a direct effect from family bonding to
self-concept and family bonding to school experiences among
students.
Abstract: Traditional principal components analysis (PCA)
techniques for face recognition are based on batch-mode training
using a pre-available image set. Real-world applications require that
the training set be dynamic and of an evolving nature where, within the
framework of continuous learning, new training images are
continuously added to the original set; this would trigger a costly
continuous re-computation of the eigenspace representation by
repeating an entire batch-based training that includes the old and new
images. Incremental PCA methods allow adding new images and
updating the PCA representation. In this paper, two incremental
PCA approaches, CCIPCA and IPCA, are examined and compared.
In addition, different learning and testing strategies are proposed and
applied to the two algorithms. The results suggest that batch PCA is
inferior to both incremental approaches, and that all CCIPCAs are
practically equivalent.
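The contrast between batch and incremental training can be sketched as follows (an illustrative Python sketch using scikit-learn's IncrementalPCA as a stand-in for the paper's CCIPCA and IPCA implementations, on synthetic low-rank data rather than face images):

```python
import numpy as np
from sklearn.decomposition import PCA, IncrementalPCA

rng = np.random.default_rng(0)

# synthetic low-rank "image" data with a dominant principal direction
latent = rng.normal(size=(200, 5)) * np.array([10.0, 8.0, 6.0, 4.0, 2.0])
mixing = rng.normal(size=(5, 64))
faces = latent @ mixing + 0.1 * rng.normal(size=(200, 64))

# batch PCA: one pass over the full, pre-available training set
batch = PCA(n_components=5).fit(faces)

# incremental PCA: images arrive in chunks, no full re-computation
inc = IncrementalPCA(n_components=5)
for chunk in np.array_split(faces, 10):   # ten batches of 20 "new" images
    inc.partial_fit(chunk)

# both eigenspaces should share the leading direction; compare the
# absolute cosine similarity of the first components (sign is arbitrary)
cos = abs(batch.components_[0] @ inc.components_[0])
print(round(cos, 3))
```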
Abstract: Cosmic rays, from their places of origin in space,
upon entering the Earth's atmosphere generate cascades of secondary
particles called Extensive Air Showers (EAS). Detection and analysis of EAS and similar High
Energy Particle Showers involve a plethora of experimental setups
with certain constraints for which soft-computational tools like
Artificial Neural Networks (ANNs) can be adopted. The optimality
of ANN classifiers can be enhanced further by the use of a Multiple
Classifier System (MCS) and certain data-dimension reduction
techniques. This work describes the performance of certain data
dimension reduction techniques like Principal Component Analysis
(PCA), Independent Component Analysis (ICA) and Self Organizing
Map (SOM) approximators for application with an MCS formed
using a Multi Layer Perceptron (MLP), a Recurrent Neural Network
(RNN) and a Probabilistic Neural Network (PNN). The data inputs are
obtained from an array of detectors placed in a circular arrangement
resembling a practical detector grid, and have a high dimensionality
and strong correlation among themselves. The PCA, ICA and SOM
blocks reduce the correlation and generate a form suitable for real
time practical applications for prediction of primary energy and
location of EAS from density values captured using detectors in a
circular grid.
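The overall pipeline, a dimension-reduction front end feeding a multiple classifier system, can be sketched as follows (an illustrative scikit-learn sketch on synthetic data; logistic regression and k-NN stand in for the RNN and PNN members, which scikit-learn does not provide):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# synthetic stand-in for correlated, high-dimensional detector densities
X, y = make_classification(n_samples=600, n_features=40, n_informative=8,
                           n_redundant=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# a PCA front end decorrelates the inputs; the MCS then majority-votes
# over three dissimilar member classifiers
mcs = make_pipeline(
    PCA(n_components=8),
    VotingClassifier([
        ('mlp', MLPClassifier(max_iter=2000, random_state=0)),
        ('lr', LogisticRegression(max_iter=1000)),
        ('knn', KNeighborsClassifier()),
    ], voting='hard'),
)
acc = mcs.fit(Xtr, ytr).score(Xte, yte)
print(round(acc, 3))
```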
Abstract: Nowadays, ontologies are the only widely accepted paradigm for the management of sharable and reusable knowledge in a way that allows its automatic interpretation. They are collaboratively created across the Web and used to index, search and annotate documents. The vast majority of the ontology based approaches, however, focus on indexing texts at document level. Recently, with the advances in ontological engineering, it became clear that information indexing can largely benefit from the use of general purpose ontologies which aid the indexing of documents at word level. This paper presents a concept indexing algorithm, which adds ontology information to words and phrases and allows full text to be searched, browsed and analyzed at different levels of abstraction. This algorithm uses a general purpose ontology, OntoRo, and an ontologically tagged corpus, OntoCorp, both developed for the purpose of this research. OntoRo and OntoCorp are used in a two-stage supervised machine learning process aimed at generating ontology tagging rules. The first experimental tests show a tagging accuracy of 78.91% which is encouraging in terms of the further improvement of the algorithm.
Abstract: Previous studies on political budget cycles (PBCs)
implicitly assume the executive has full discretion power over fiscal
policy, neglecting the role of checks and balances of the legislature.
This paper goes beyond traditional PBCs models and sheds light on
the case study of Japan, South Korea, and Taiwan over the 1988-2007
periods. Based on the results, we find no evidence of electoral impacts
on public expenditures in South Korea's and Taiwan's
congressional elections. We also note that PBCs are found in
Taiwan's government expenditures during our sample periods.
Furthermore, the results show that Japan's legislature exercises
significant checks and balances on the government's expenditures.
However, the empirical results show that the legislative veto player in
Taiwan neither has an effect on the reduction of public expenditures
nor a moderating effect over Taiwan's political budget cycles, the
estimates being statistically insignificant. We suggest that the existence of
PBCs in Taiwan is due to a weaker system of checks and balances. Our
conjecture is that Taiwan either has no legislative veto player or has
observed low compliance with the law during the time period examined
in our study.
Abstract: Both the minimum energy consumption and
smoothness, which is quantified as a function of jerk, are generally
needed in many dynamic systems such as the automobile and the
pick-and-place robot manipulator that handles fragile equipment.
Nevertheless, many researchers have focused solely on either
minimum energy consumption or the minimum jerk
trajectory. This research paper proposes a simple yet very interesting
relationship between the minimum direct and indirect jerks
approaches in designing the time-dependent system yielding an
alternative optimal solution. Extremal solutions for the cost functions
of direct and indirect jerks are found using the dynamic optimization
methods together with the numerical approximation. This is to allow
us to simulate and compare visually and statistically the time history
of control inputs employed by minimum direct and indirect jerk
designs. By considering the minimum indirect jerk problem, the
numerical solution becomes much easier and yields results similar
to those of the minimum direct jerk problem.
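For the point-to-point case, the classical closed-form minimum-jerk trajectory (a standard textbook result for rest-to-rest motion, shown here only to illustrate what a minimum direct jerk design optimizes; the paper's own formulation is more general) can be sketched as:

```python
import numpy as np

def min_jerk(x0, xf, T, n=101):
    """Rest-to-rest trajectory minimising the integral of squared jerk:
    x(t) = x0 + (xf - x0) * (10 s^3 - 15 s^4 + 6 s^5), with s = t / T."""
    t = np.linspace(0.0, T, n)
    s = t / T
    return t, x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

t, x = min_jerk(0.0, 1.0, 2.0)        # move from 0 to 1 over 2 seconds
v = np.gradient(x, t)                 # numerical velocity
a = np.gradient(v, t)                 # numerical acceleration

# starts and ends at rest: velocity vanishes at both endpoints
print(x[0], x[-1], round(v[0], 4), round(v[-1], 4))
```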
Abstract: In recent years, fast neural networks for object/face detection have been introduced based on cross correlation in the frequency domain between the input matrix and the hidden weights of neural networks. In our previous papers [3,4], fast neural networks for certain code detection were introduced. It was proved in [10] that for fast neural networks to give the same correct results as conventional neural networks, both the weights of the neural networks and the input matrix must be symmetric. This condition made those fast neural networks slower than conventional neural networks. Another symmetric form for the input matrix was introduced in [1-9] to speed up the operation of these fast neural networks. Here, corrections for the cross correlation equations (given in [13,15,16]) to compensate for the symmetry condition are presented. After these corrections, it is proved mathematically that the number of computation steps required for fast neural networks is less than that needed by classical neural networks. Furthermore, there is no need to convert the input data into symmetric form. Moreover, this new idea is applied to increase the speed of neural networks in the case of processing complex values. Simulation results after these corrections using MATLAB confirm the theoretical computations.
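The core speed-up, computing cross-correlation in the frequency domain, can be sketched in a few lines (an illustrative 1-D NumPy sketch of the circular case, not the paper's corrected equations):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(size=256)        # 1-D stand-in for the input data
weights = rng.normal(size=256)       # stand-in for a neuron's weights

# direct circular cross-correlation, O(n^2)
direct = np.array([np.dot(signal, np.roll(weights, -k)) for k in range(256)])

# the same result via the frequency domain, O(n log n):
# corr(f, g) = IFFT( conj(FFT(f)) * FFT(g) )
fast = np.fft.ifft(np.conj(np.fft.fft(signal)) * np.fft.fft(weights)).real

print(np.allclose(direct, fast))     # True
```

The O(n log n) frequency-domain path is what makes these networks "fast" relative to sliding-window evaluation.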
Abstract: In this paper, an analytical approach is used to study the coupled lateral-torsional vibrations of a laminated composite beam. It is known that in such structures, due to the fiber orientation in the various layers, any lateral displacement will produce a twisting moment. This phenomenon is modeled by the bending-twisting material coupling rigidity, and its main feature is the coupling of lateral and torsional vibrations. In addition to the material coupling, the effects of shear deformation and rotary inertia are taken into account in the definition of the potential and kinetic energies. Then, the governing differential equations are derived using Hamilton's principle, and the mathematical model matches the Timoshenko beam model when neglecting the effect of bending-twisting rigidity. The equations of motion, which form a system of three coupled PDEs, are solved analytically to study the free vibrations of the beam in lateral and rotational modes due to bending, as well as the torsional mode caused by twisting. The analytic solution is carried out in three steps: 1) assuming synchronous motion for the kinematic variables, which are the lateral, rotational and torsional displacements, 2) solving the ensuing eigenvalue problem, which contains three coupled second order ODEs, and 3) imposing different boundary conditions related to combinations of simply supported, clamped and free end conditions. The resulting natural frequencies and mode shapes are compared with similar results in the literature and good agreement is achieved.
Abstract: In order to maximize the efficiency of an information management platform and to assist in decision making, the collection, storage and analysis of performance-relevant data have become of fundamental importance. This paper addresses the merits and drawbacks provided by the OLAP paradigm for efficiently navigating large volumes of performance measurement data hierarchically. System managers or database administrators navigate through adequately (re)structured measurement data aiming to detect performance bottlenecks, identify causes of performance problems or assess the impact of configuration changes on the system and its representative metrics. Of particular importance is finding the root cause of an imminent problem threatening the availability and performance of an information system. Leveraging OLAP techniques, in contrast to traditional static reporting, this is supposed to be accomplished within a moderate amount of time and with little processing complexity. It is shown how OLAP techniques can help improve the understandability and manageability of measurement data and, hence, improve the whole performance analysis process.
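The OLAP-style navigation described above can be sketched with pandas (a hypothetical miniature fact table; the hosts and metrics are invented for illustration): pivot_table performs a roll-up that aggregates one dimension away, and a query drills back down into a single slice.

```python
import pandas as pd

# toy performance-measurement facts (hypothetical schema)
facts = pd.DataFrame({
    'host':   ['db1', 'db1', 'db2', 'db2', 'db1', 'db2'],
    'metric': ['cpu', 'io',  'cpu', 'io',  'cpu', 'io'],
    'hour':   [9, 9, 9, 9, 10, 10],
    'value':  [80, 40, 55, 70, 90, 65],
})

# roll-up: aggregate the hour level away, keeping a host x metric cube
cube = facts.pivot_table(index='host', columns='metric',
                         values='value', aggfunc='mean')
print(cube)

# drill-down on one slice: db1 CPU by hour, exposing the later spike
print(facts.query("host == 'db1' and metric == 'cpu'")
           .set_index('hour')['value'])
```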
Abstract: In this report we present a rule-based approach to
detect anomalous telephone calls. The method described here uses
subscriber usage CDR (call detail record) data sampled over two
observation periods: study period and test period. The study period
contains call records of customers' non-anomalous behaviour.
Customers are first grouped according to their similar usage
behaviour (e.g., average number of local calls per week). For
customers in each group, we develop a probabilistic model to describe
their usage. Next, we use maximum likelihood estimation (MLE) to
estimate the parameters of the calling behaviour. Then we determine
thresholds by calculating acceptable change within a group. MLE is
used on the data in the test period to estimate the parameters of the
calling behaviour. These parameters are compared against thresholds.
Any deviation beyond the threshold is used to raise an alarm. This
method has the advantage of identifying local anomalies as compared
to techniques which identify global anomalies. The method is tested
for 90 days of study data and 10 days of test data of telecom
customers. For medium to large deviations in the data in the test window,
the method is able to identify 90% of anomalous usage with less than
1% false alarm rate.
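The study/test-period scheme can be sketched under a simple Poisson assumption (an illustrative Python sketch; the abstract does not specify the paper's actual distributional model or threshold rule):

```python
import numpy as np

rng = np.random.default_rng(0)

# study period: daily call counts for one usage group, assumed Poisson
# (a modelling assumption made here for illustration)
study = rng.poisson(lam=12.0, size=(50, 90))   # 50 customers x 90 days

# the MLE of a Poisson rate is simply the sample mean
lam_hat = study.mean()

# acceptable change within the group: 3 sigma of a 10-day average
# under the fitted model (an illustrative threshold rule)
upper = lam_hat + 3 * np.sqrt(lam_hat / 10)

# test period: 10-day averages for a normal and an anomalous customer
normal = rng.poisson(lam=12.0, size=10).mean()
anomalous = rng.poisson(lam=25.0, size=10).mean()
print(normal <= upper, anomalous > upper)
```

A deviation beyond the group threshold raises an alarm, flagging the customer whose usage no longer matches the group's fitted parameters.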
Abstract: This paper presents an integrated model that
automatically measures the change of rivers, damage area of bridge
surroundings, and change of vegetation. The proposed model is on the
basis of a neurofuzzy mechanism enhanced by SOM optimization
algorithm, and also includes three functions to deal with river imagery.
High resolution imagery from FORMOSAT-2 satellite taken before
and after the invasion period is adopted. By randomly selecting a
bridge out of 129 destroyed bridges, the recognition results show that
the average width has increased by 66%. The ruined segment of the
bridge is located exactly at the most scoured region. The vegetation
coverage has also been reduced to nearly 90% of the original. The results
yielded by the proposed model demonstrate a pinpoint accuracy rate
of 99.94%. This study provides a successful tool not only for
large-scale damage assessment but also for precise measurement of
disaster impacts.
Abstract: Leave of absence is important in maintaining the
quality of human resources. Allowing employees to be temporarily
free from routine assignments can revitalize workers' morale
and productivity. This is particularly critical to securing satisfactory
service quality for healthcare professionals, whose work is typically
labor-intensive and complicated. As
one of the veteran hospitals founded and operated by the
Veteran Department of Taiwan, the nursing staff of the case hospital
was squeezed to an extreme minimum level under the pressure of a
tight budgeting. Leave of absence on schedule became extremely
difficult, especially for the intensive care units (ICU), which
required close monitoring of the patients under care and more
easily left the ICU nurses under stress. Even worse, the deferred leaves
exceeded 10 days at any time in the ICU because of fluctuating
occupancy. As a result, this brought a serious setback to this
particular nursing team, and consequently degraded job
performance and service quality. To solve this problem and
accordingly to strengthen morale, a project team was organized
across different departments specifically for this purpose. Detailed
information regarding job and position requirements, labor resources,
and actual working hours was collected and analyzed in the team
meetings. Several alternatives were finalized, including job
rotation, job combination, impromptu leave and cross-departmental
redeployment. Consequently, the number of deferred leave days was
sharply reduced by 70% to a level of 3 days or fewer. This improvement
not only provided good relief for the ICU nurses, improving their job
performance and patient safety, but also encouraged the nurses' active
participation in the project and taught them the skills of solving problems
with colleagues.