Abstract: In this work, simulation algorithms for contact drying of agitated particulate materials under vacuum and at atmospheric pressure were developed. The implemented algorithms give a predictive estimate of the drying rate curves and bulk bed temperature during contact drying. The calculations are based on the penetration model of the drying process, in which all process parameters, such as heat and mass transfer coefficients, effective bed properties, and gas- and liquid-phase properties, are estimated with appropriate correlations. Simulation results were compared with experimental data from the literature. In both cases, the simulation results were in good agreement with the experimental data. A few deviations were identified, and the limitations of the predictive capabilities of the models are discussed. The programs give good insight into the drying behaviour of the analysed powders.
Abstract: This paper focuses on the development of a bond graph dynamic model of the mechanical dynamics of an excavating mechanism previously designed for use with small tractors, which are fabricated in the Engineering Workshops of Jomo Kenyatta University of Agriculture and Technology. To develop a mechanical dynamics model of the manipulator, forward recursive equations similar to those applied in the iterative Newton-Euler method were used to obtain kinematic relationships between the time rates of the joint variables and the generalized Cartesian velocities of the centroids of the links. Representing the obtained kinematic relationships in bond graph form, while considering the link weights and momenta as the elements, led to a detailed bond graph model of the manipulator.
The bond graph method was found to significantly reduce the number of recursive computations required to arrive at a mechanical dynamic model of a 3-DOF manipulator, indicating that the bond graph method is more computationally efficient than the Newton-Euler method in developing dynamic models of 3-DOF planar manipulators.
The model was verified by comparing the joint torque expressions of a two-link planar manipulator with those obtained using the Newton-Euler and Lagrangian methods as analyzed in robotics textbooks. The expressions were found to agree, indicating that the model captures the rigid body dynamics of the manipulator. Based on
the model developed, actuator sizing and valve sizing methodologies
were developed and used to obtain the optimal sizes of the pistons
and spool valve ports, respectively. It was found that, using a pump of the sized flow rate capacity, the tractor's engine is able to power the excavating mechanism in digging a sandy-loam soil.
Abstract: About 98% of the energy needed in Taiwan is imported, and the prices of petroleum and electricity have been increasing. In addition, facility capacity, electricity generation, electricity consumption, and the number of Taiwan Power Company customers have continued to increase. For these reasons, energy conservation has become an important topic. In the past, linear regression was used to establish the power consumption
models for chillers. In this study, grey prediction is used to evaluate
the power consumption of a chiller so as to lower the total power
consumption at peak-load (so that the relevant power providers do not
need to keep on increasing their power generation capacity and facility
capacity).
In grey prediction, only a few data points (at least four) are needed to establish the power consumption models for chillers. If the PLR, the supply and return chilled-water temperatures, and the supply and return cooling-water temperatures are taken into consideration, quite accurate results (with accuracy close to 99% for short-term predictions)
may be obtained. Through such methods, we can predict whether the
power consumption at peak-load will exceed the contract power
capacity signed by the corresponding entity and Taiwan Power
Company. If the peak-load power consumption exceeds the contracted capacity, the supply chilled-water temperature may be adjusted so as to reduce the PLR and hence lower the power consumption.
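The grey-prediction step described above is typically the GM(1,1) model, which needs at least four data points. Below is a minimal, generic pure-Python sketch of the standard GM(1,1) procedure, not the authors' implementation; the kW-like series in the usage note is invented for illustration:

```python
import math

def gm11(x0, horizon=1):
    """Fit a GM(1,1) grey model to series x0 (needs >= 4 points) and
    return fitted values plus `horizon` forecasts of the original series."""
    n = len(x0)
    assert n >= 4, "grey prediction needs at least four values"
    # 1-AGO: accumulated generating operation
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)
    # Background values z and least-squares estimation of a, b in
    # x0[k] = -a*z[k] + b (simple linear regression of x0[1:] on z).
    z = [(x1[k - 1] + x1[k]) / 2.0 for k in range(1, n)]
    y = x0[1:]
    mz = sum(z) / len(z)
    my = sum(y) / len(y)
    cov = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
    var = sum((zi - mz) ** 2 for zi in z)
    a = -cov / var          # development coefficient
    b = my + a * mz         # grey input
    # Time-response function, then inverse AGO back to the original series.
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    out = [x0[0]]
    for k in range(1, n + horizon):
        out.append(x1_hat(k) - x1_hat(k - 1))
    return out
```

For example, `gm11([100, 110, 121, 133.1])` forecasts roughly 146 as the next value of this made-up, near-geometric series.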
Abstract: In recent methodological articles related to structural equation modeling (SEM), the question of how to measure endogenous formative variables has been raised as an urgent, unresolved issue. This research presents an empirical application from the CRM system development context to test a recently developed technique, which makes it possible to measure endogenous formative constructs in structural models. PLS path modeling is used to demonstrate the feasibility of measuring antecedent relationships at the formative indicator level, not the formative construct level. Empirical results show that this technique is a promising approach to measure antecedent relationships of formative constructs in SEM.
Abstract: Software organizations are constantly looking for
better solutions when designing and using well-defined software
processes for the development of their products and services.
However, while the technical aspects are relatively easy to arrange, many software development processes lack adequate support for project management issues. When adopting such processes, an organization needs to apply good project management skills along with the technical views provided by those models. This research proposes the definition of a new model that integrates the concepts of the PMBOK with those available in the OPEN metamodel, supporting not only process integration but also the steps towards a more comprehensive and automatable model.
Abstract: Droplet size distributions in the cold spray of a fuel are important to the observed combustion behavior. Specification of droplet size and velocity distributions immediately downstream of injectors is also essential as boundary conditions for advanced computational fluid dynamics (CFD) and two-phase spray transport calculations. This paper describes the development of a new model to be incorporated into the maximum entropy principle (MEP) formalism for the prediction of the droplet size distribution in the droplet formation region.
The MEP approach can predict the most likely droplet size and
velocity distributions under a set of constraints expressing the
available information related to the distribution.
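The constrained-entropy problem behind the MEP approach can be stated compactly. This is a textbook form of MEP for sprays, with generic constraint functions $g_k$ standing in for the mass, momentum, and energy constraints; it is not the authors' specific source-term formulation:

```latex
\max_{f}\; S = -\sum_{i,j} f_{ij}\ln f_{ij}
\quad\text{s.t.}\quad
\sum_{i,j} f_{ij} = 1,\qquad
\sum_{i,j} f_{ij}\, g_k(D_i,u_j) = \bar{g}_k,\quad k=1,\dots,m,
\qquad\Longrightarrow\qquad
f_{ij} = \exp\Bigl(-\lambda_0 - \sum_{k=1}^{m}\lambda_k\, g_k(D_i,u_j)\Bigr),
```

where $f_{ij}$ is the joint probability of a droplet having diameter $D_i$ and velocity $u_j$, and the $\lambda_k$ are Lagrange multipliers determined by the constraints.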
In this article, by considering the mechanisms of turbulence generation inside the nozzle and wave growth on the jet surface, we attempt to provide a logical framework coupling the flow inside the nozzle to the resulting atomization process. The purpose of this paper is to describe the formulation of this new model and to incorporate it into the maximum entropy principle (MEP) by coupling the sub-models through source terms of momentum and energy. Comparisons between the model predictions and experimental data for a gas turbine swirling nozzle and an annular spray indicate good agreement between model and experiment.
Abstract: In this paper, penalized power-divergence test statistics have been defined, and their exact size properties in testing a nested sequence of log-linear models have been compared with those of ordinary power-divergence test statistics for various penalization, λ, and main-effect values. Since the ordinary and penalized power-divergence test statistics have the same asymptotic distribution, comparisons have been made only for small and moderate samples. Three-way contingency tables distributed according to a multinomial distribution have been considered. Simulation results reveal that the penalized power-divergence test statistics perform much better than their ordinary counterparts.
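For reference, the ordinary (unpenalized) power-divergence family of Cressie and Read can be sketched as follows; the penalized variants studied in the paper modify the cell estimates and are not reproduced here:

```python
import math

def power_divergence(observed, expected, lam):
    """Cressie-Read power-divergence statistic
    T_lam = 2/(lam*(lam+1)) * sum O*((O/E)^lam - 1),
    with the limits lam -> 0 (G^2) and lam -> -1 handled separately.
    lam = 1 recovers Pearson's X^2 (when total O equals total E)."""
    if lam == 0:   # likelihood-ratio statistic G^2
        return 2.0 * sum(o * math.log(o / e)
                         for o, e in zip(observed, expected) if o > 0)
    if lam == -1:  # modified likelihood-ratio statistic
        return 2.0 * sum(e * math.log(e / o)
                         for o, e in zip(observed, expected))
    return (2.0 / (lam * (lam + 1))) * sum(
        o * ((o / e) ** lam - 1.0) for o, e in zip(observed, expected))
```

The family is continuous in λ, so values near 0 approach the G² statistic.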
Abstract: Using Dynamic Bayesian Networks (DBN) to model genetic regulatory networks from gene expression data is one of the major paradigms for inferring the interactions among genes. Averaging over a collection of models is preferable to relying on a single high-scoring model when predicting the network. In this paper, two kinds of model search approaches are compared: Greedy hill-climbing Search with Restarts (GSR) and Markov Chain Monte Carlo (MCMC) methods. GSR is preferred in many papers, but there has been no comparative study of which is better for DBN models. Different types of experiments have been carried out to provide a benchmark test of these approaches. Our experimental results demonstrate that, on average, the MCMC methods outperform GSR in the accuracy of the predicted network while having comparable time efficiency. By proposing different variations of MCMC and employing a simulated annealing strategy, the MCMC methods become more efficient and stable. Apart from the comparison between these approaches, another objective of this study is to investigate the feasibility of using DBN modeling approaches to infer gene networks from a few snapshots of high-dimensional gene profiles. Through experiments on synthetic data as well as systematic data, the results reveal how the performance of these approaches is influenced as the target gene network varies in network size, data size, and system complexity.
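The simulated annealing strategy mentioned can be illustrated by the standard Metropolis-style acceptance rule for a proposed change to the network structure. This is a generic sketch, not the paper's specific MCMC variants; `delta_score` stands for the assumed change in the network's score under the proposal:

```python
import math
import random

def anneal_accept(delta_score, temperature, rng=random.random):
    """Simulated-annealing acceptance rule for structure search:
    always accept a score improvement; accept a worse structure
    with probability exp(delta_score / temperature), so large
    temperatures explore freely and small ones settle down."""
    if delta_score >= 0:
        return True
    return rng() < math.exp(delta_score / temperature)
```

Passing a deterministic `rng` makes the rule easy to test; in an actual sampler the temperature would be decreased over the course of the run.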
Abstract: In rail vehicles, air springs are very important isolating components, which guarantee good ride comfort for passengers during their trip. In most new rail-vehicle models developed by researchers, the thermodynamic effects of air springs are ignored and the secondary suspension is modeled by simple springs and dampers. As the performance of the suspension components has significant effects on rail-vehicle dynamics and passenger ride comfort, a complete nonlinear thermodynamic air spring model, which is a combination of two different models, is introduced. Results from a field test show remarkable agreement between the proposed model and experimental data. The effects of the air suspension parameters on system performance are investigated here, and these parameters are then tuned to minimize the Sperling ride comfort index during the trip. The results show that by modifying the air suspension parameters, passenger comfort is improved and the ride comfort index is reduced by about 10%.
Abstract: Human head representations are usually based on the morphological and structural components of a real model. Over time it has become increasingly necessary to build fully virtual models that comply rigorously with the specifications of human anatomy. Still, making and using a model perfectly fitted to the real anatomy is a difficult task, because it requires large hardware resources and significant processing time. It is therefore necessary to choose the best compromise solution, one that keeps the right balance between perfection of detail and resource consumption, in order to obtain facial animations with real-time rendering. We present here the way in which we achieved such a 3D system, which we intend to use as a starting point for creating facial animations with real-time rendering, used in medicine to find and identify different types of pathologies.
Abstract: Covering-based rough sets are an extension of rough sets, based on a covering instead of a partition of the universe. They are therefore more powerful than classical rough sets in describing some practical problems. However, by extending rough sets, covering-based rough sets can increase the roughness of each model in recognizing objects. How to obtain better approximations from covering-based rough set models is an important issue.
In this paper, two concepts, determinate elements and indeterminate elements of a universe, are proposed and precisely defined. This research makes a reasonable refinement of the covering elements from a new viewpoint, and the refinement may generate better approximations in covering-based rough set models. To validate the theory, it is applied to eight major covering-based rough set models adapted from the literature. The result is that, in all these models, the lower approximation increases effectively. Correspondingly, in all models, the upper approximation decreases, with the exception of two models in some special situations.
Therefore, the roughness of recognizing objects is reduced. This
research provides a new approach to the study and application of
covering-based rough sets.
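For concreteness, the first (most common) pair of covering-based approximation operators can be sketched as follows; the eight models compared in the paper and the proposed refinement are not reproduced here:

```python
def lower_upper(covering, target):
    """Classical covering-based approximations of a target set:
    the lower approximation is the union of covering blocks fully
    contained in the target; the upper approximation is the union
    of blocks that intersect the target. A larger lower and smaller
    upper approximation means less roughness."""
    lower, upper = set(), set()
    for block in covering:
        b = set(block)
        if b <= target:   # block entirely inside the target
            lower |= b
        if b & target:    # block meets the target
            upper |= b
    return lower, upper
```

For example, with the covering {{1,2}, {2,3}, {3,4,5}} of the universe {1,...,5} and target {1,2,3}, the lower approximation is {1,2,3} and the upper approximation is the whole universe.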
Abstract: The purpose of this paper is to study database models for efficient use in e-commerce websites. We seek a method for storing and retrieving e-commerce information that semantic web applications can work with, and we also survey the database technologies used in e-commerce. Since one of the most important deficits of the semantic web is the shortage of semantic data, as most information is still stored in relational databases, we present an approach to map legacy data stored in relational databases into the Semantic Web using virtually any modern RDF query language, as long as it is closed within RDF. To achieve this goal, we study XML structures for the relational databases of legacy websites and then move one level above XML, looking for a mapping from the relational data model (RDM) to RDF. Since a large number of semantic web applications draw on the relational model, opening ways for it to be converted to XML and RDF in modern (semantic web) systems is important.
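A naive "direct mapping" from relational rows to RDF triples illustrates the kind of RDB-to-RDF translation discussed; the table name, columns, and base URI below are hypothetical, and this is not the paper's RDM-to-RDF algorithm:

```python
def rows_to_triples(table, rows, base="http://example.org/"):
    """Direct mapping sketch: each row (keyed by its primary key)
    becomes a subject URI, each column a predicate, each cell a
    plain literal object. Triples are returned in N-Triples-like
    string form for readability."""
    triples = []
    for pk, row in rows.items():
        subject = f"<{base}{table}/{pk}>"
        for col, val in row.items():
            triples.append((subject, f"<{base}{table}#{col}>", f'"{val}"'))
    return triples
```

Real mappings (e.g. along the lines of W3C R2RML) additionally handle foreign keys as object links, datatypes, and NULLs, which this sketch ignores.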
Abstract: Nowadays, the yearly growth of the human population results in increasing water usage and demand. The Saen Saep canal is an important canal in Bangkok. The main objective of this study is to use an Artificial Neural Network (ANN) model to estimate the Chemical Oxygen Demand (COD) from data collected at 11 sampling sites. The data were obtained from the Department of Drainage and Sewerage, Bangkok Metropolitan Administration, during 2007-2011. Twelve water quality parameters, all of which affect the COD, are used as inputs to the models. The experimental results indicate that the ANN model provides a high correlation coefficient (R=0.89).
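The reported R is the Pearson correlation coefficient between the model's COD predictions and the observed values; a minimal sketch of how such an R is computed (the ANN itself is not reproduced here):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    sequences, e.g. predicted vs. observed COD values."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A value near 1 indicates the predictions track the observations closely; R = 0.89 corresponds to fairly strong agreement.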
Abstract: Co-integration models the long-term equilibrium relationship of two or more related financial variables. Even if co-integration is found, there may be short-run deviations from the long-run equilibrium relationship. The aim of this work is to forecast these deviations using neural networks and to create a trading strategy based on them. A case study is used: co-integration residuals from Australian Bank Bill futures are forecast and traded using various exogenous input variables combined with neural networks. The choice of the optimal exogenous input variables for each neural network, undertaken in previous work [1], is validated by comparing the forecasts and corresponding profitability of each, using a trading strategy.
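The co-integration residuals that are forecast can be sketched as the OLS residuals of one series regressed on another; this is a simplified two-variable illustration, and the paper's actual estimation procedure may differ:

```python
def cointegration_residuals(y, x):
    """Fit y_t = alpha + beta * x_t by ordinary least squares and
    return the residuals: the short-run deviations from the
    long-run equilibrium relationship between the two series."""
    n = len(y)
    mx = sum(x) / n
    my = sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    alpha = my - beta * mx
    return [b - (alpha + beta * a) for a, b in zip(x, y)]
```

If the residual series is stationary (mean-reverting), its forecastable deviations are what a convergence trading strategy would exploit.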
Abstract: The volume of XML data exchange is increasing explosively, and the need for efficient mechanisms of XML data management is vital. Many XML storage models have been proposed for storing DTD-independent XML documents in relational database systems. Benchmarking is the best way to highlight the pros and cons of the different approaches. In this study, we use a common benchmarking scheme, known as XMark, to compare the most cited and the newly proposed DTD-independent methods in terms of logical reads, physical I/O, CPU time, and duration. We show the effect of the label path, of extracting values into a separate table, and of the type of join needed for each method's query answering.
Abstract: The scale, complexity and worldwide geographical
spread of the LHC computing and data analysis problems are
unprecedented in scientific research. The complexity of processing
and accessing this data is increased substantially by the size and
global span of the major experiments, combined with the limited
wide area network bandwidth available. We present the latest
generation of the MONARC (MOdels of Networked Analysis at
Regional Centers) simulation framework, as a design and modeling
tool for large scale distributed systems applied to HEP experiments.
We present simulation experiments designed to evaluate the capabilities of the current real-world distributed infrastructure to support existing physics analysis processes, and the means by which the experiments band together to meet the technical challenges posed by the storage, access, and computing requirements of LHC data analysis within the CMS experiment.
Abstract: Not all types of mobile phone succeed in entering the market, because some types are perceived negatively by users. Therefore, it is important to understand the influence of a mobile phone's characteristics on local user perception. This research investigates the influence of QWERTY mobile phone forms on the perception of Indonesian users. First, several alternative mobile phone forms are developed based on a certain number of existing mobile phone models. In the second stage, word pairs serving as design attributes of the mobile phone are chosen to represent the user perception of the phone. In the final stage, a survey is conducted to investigate the influence of the developed form alternatives on user perception. Based on the research, users perceive the form with a curved top and straight bottom and the form with a slider and antenna as the most negative, while the form with curved top and bottom shapes and the form without slider and antenna are perceived as the most positive.
Abstract: Some quality control tools use non-metric subjective information coming from experts, who qualify the intensity of the relations existing inside processes without quantifying them. In this paper we develop a quality control analytic tool that measures the impact, or strength, of the relationship between process operations and product characteristics. The tool includes two models: a qualitative model, allowing relationships to be described and analyzed, and a formal quantitative model, by means of which relationship quantification is achieved. In the first, concepts from graph theory are applied to identify those process elements which can be sources of variation, that is, those quality characteristics or operations that have some sort of precedence over the others and that should become control items. The most dependent elements can also be identified, that is, those elements receiving the effects of the elements identified as variation sources. If controls are focused on those dependent elements, the efficiency of control is compromised by the fact that we are controlling effects, not causes. The second model adapts the multivariate statistical technique of covariance structure analysis; this approach allowed us to quantify the relationships. The computer package LISREL was used to obtain the statistics and to validate the model.
Abstract: This study is concerned with investigating the suitability of several empirical and semi-empirical drying models available in the literature for describing the drying behaviour of viscose yarn bobbins. For this purpose, the experimental drying behaviour of viscose bobbins was first determined on an experimental dryer setup designed and manufactured based on the hot-air bobbin dryers used in the textile industry. Afterwards, the drying models considered
were fitted to the experimentally obtained moisture ratios. Drying
parameters were the drying temperature and the bobbin diameter. The fit was performed by selecting the model constants so as to minimize the sum of squared differences between the experimental and predicted moisture ratios. Goodness of fit was assessed by comparing the correlation coefficient, standard error, and mean square deviation.
The results show that the most appropriate model in describing the
drying curves of viscose bobbins is the Page model.
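The fitting procedure, applied for instance to the Page model MR = exp(-k t^n), can be sketched with a crude grid search over the constants; the grids and data below are hypothetical, and the authors' actual regression method is not specified here:

```python
import math

def fit_page_model(t, mr, k_grid, n_grid):
    """Fit the Page thin-layer drying model MR = exp(-k * t**n) by
    picking the (k, n) pair on a grid that minimizes the sum of
    squared differences between experimental and predicted
    moisture ratios. Returns (sse, k, n)."""
    best = None
    for k in k_grid:
        for n in n_grid:
            sse = sum((m - math.exp(-k * ti ** n)) ** 2
                      for ti, m in zip(t, mr))
            if best is None or sse < best[0]:
                best = (sse, k, n)
    return best
```

In practice a nonlinear least-squares routine would refine the constants continuously; the grid search just makes the minimized objective explicit.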
Abstract: Spare parts inventory management is one of the major
areas of inventory research. Analysis of recent literature showed that
an approach integrating spare parts classification, demand
forecasting, and stock control policies is essential; however, adoption of this integrated approach is limited. This work presents an integrated
framework for spare part inventory management and an Excel based
application developed for the implementation of the proposed
framework. A multi-criteria analysis has been used for spare
classification. Forecasting of the spare parts' intermittent demand has been incorporated into the application using three different forecasting models, namely normal distribution, exponential smoothing, and Croston's method. The application is also capable of
running with different inventory control policies. To illustrate the
performance of the proposed framework and the developed
application, the framework is applied to different items at a service
organization. The results achieved are presented and possible areas
for future work are highlighted.
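Croston's method mentioned above can be sketched as follows; this is the generic textbook version, which smooths nonzero demand sizes and inter-demand intervals separately, not the Excel application's code, and `alpha` is an assumed smoothing constant:

```python
def croston(demand, alpha=0.1):
    """Croston's method for intermittent demand. Returns the list of
    one-step forecasts (None until the first demand is seen) and the
    final per-period forecast z/p, where z is the smoothed nonzero
    demand size and p the smoothed inter-demand interval."""
    z = p = None
    q = 1  # periods since the last nonzero demand
    forecasts = []
    for d in demand:
        forecasts.append(z / p if z is not None else None)
        if d > 0:
            if z is None:             # initialize on first demand
                z, p = float(d), float(q)
            else:                     # exponential smoothing updates
                z += alpha * (d - z)
                p += alpha * (q - p)
            q = 1
        else:
            q += 1
    return forecasts, (z / p if z is not None else None)
```

For regular demand (a nonzero value every period) the method reduces to simple exponential smoothing of the demand size, since the interval estimate stays at 1.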