Abstract: In this work, the location of the interface in a stirred vessel equipped with a concave impeller was studied by computational fluid dynamics. To model the rotation of the impeller, the sliding mesh (SM) technique was used, and the standard k-ε model was selected for turbulence closure. Mean tangential, radial and axial velocities, as well as turbulent kinetic energy (k) and turbulent dissipation rate (ε), were investigated at various points in the tank. The results show the sensitivity of the system to the location of the interface; placing the interface at a radius of 7 to 10 cm in a vessel with the stated characteristics increases the accuracy of the simulation.
Abstract: Simultaneous Saccharification and Fermentation (SSF) of corn flour, a major agricultural product, was studied as the substrate in batch fermentation, using the starch-digesting glucoamylase enzyme derived from Aspergillus niger and the non-starch-digesting, sugar-fermenting yeast Saccharomyces cerevisiae. Experiments based on a Central Composite Design (CCD) were conducted to study the effect of substrate concentration, pH, temperature and enzyme concentration on ethanol concentration, and these parameters were optimized using Response Surface Methodology (RSM). The optimum values of substrate concentration, pH, temperature and enzyme concentration were found to be 160 g/l, 5.5, 30°C and 50 IU, respectively. The effect of inoculum age on ethanol concentration was also investigated. The corn flour solution equivalent to 16% initial starch concentration gave the highest ethanol concentration of 63.04 g/l after 48 h of fermentation at the optimum pH and temperature. The Monod and Logistic models were used for growth kinetics, and the Luedeking-Piret model was used for product formation kinetics.
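The growth and product-formation kinetics named in this abstract can be sketched numerically. The following minimal Python illustration couples logistic biomass growth with Luedeking-Piret product formation; all parameter values are illustrative placeholders, not the fitted values from the study.

```python
# Logistic growth for biomass X and Luedeking-Piret formation for
# product P, integrated by forward Euler. Parameters (mu_max, x_max,
# alpha, beta) are illustrative assumptions, not fitted values.

def simulate(mu_max=0.3, x_max=10.0, alpha=6.0, beta=0.05,
             x0=0.1, p0=0.0, t_end=48.0, dt=0.01):
    x, p, t = x0, p0, 0.0
    while t < t_end:
        dxdt = mu_max * x * (1.0 - x / x_max)   # logistic growth
        dpdt = alpha * dxdt + beta * x          # Luedeking-Piret
        x += dxdt * dt
        p += dpdt * dt
        t += dt
    return x, p

# biomass and product after 48 h of simulated fermentation
x48, p48 = simulate()
```

With growth-associated (alpha) and non-growth-associated (beta) terms both nonzero, product keeps accumulating after biomass plateaus, which is the behavior the Luedeking-Piret model is meant to capture.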
Abstract: Recurrent event data are a special type of multivariate
survival data. Dynamic and frailty models are two of the approaches
that deal with this kind of data. A comparison between these two
models is studied using the empirical standard deviation of the
standardized martingale residual processes as a way of assessing the
fit of the two models, based on the Aalen additive regression model.
We found that both approaches take heterogeneity into account and
produce residual standard deviations close to each other, both in the
simulation study and in the real data set.
Abstract: Dengue is an infectious vector-borne viral
disease that is commonly found in tropical and sub-tropical regions
around the world, especially in urban and semi-urban areas,
including in Malaysia. There is no currently available vaccine or
chemotherapy for the prevention or treatment of dengue.
Therefore, prevention and treatment of the disease depend on vector
surveillance and control measures. Disease risk mapping has been
recognized as an important tool in the prevention and control
strategies for diseases. The choice of statistical model used for
relative risk estimation is important as a good model will
subsequently produce a good disease risk map. Therefore, the aim of
this study is to estimate the relative risk for dengue disease based
initially on the most common statistic used in disease mapping called
Standardized Morbidity Ratio (SMR) and one of the earliest
applications of Bayesian methodology called Poisson-gamma model.
This paper begins by providing a review of the SMR method, which
we then apply to dengue data from Perak, Malaysia. We then fit an
extension of the SMR method, which is the Poisson-gamma model.
Both results are displayed and compared using graphs, tables and
maps. The results of the analysis show that the latter method gives
better relative risk estimates than the SMR. The Poisson-gamma
model has been demonstrated to overcome the problem of the SMR
when there are no observed dengue cases in certain regions.
However, covariate adjustment in this model is difficult, and it does
not allow for spatial correlation between risks in adjacent areas.
These drawbacks have motivated many researchers to propose
alternative methods for estimating the risk.
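The two estimators contrasted in this abstract have simple closed forms: the SMR is the ratio of observed to expected counts, and the Poisson-gamma posterior mean shrinks that ratio toward a gamma prior. A minimal Python sketch, using illustrative counts and hyperparameters rather than the Perak dengue data:

```python
# SMR versus Poisson-gamma relative risk. Counts O, expectations E
# and the Gamma(a, b) hyperparameters are illustrative assumptions.

def smr(observed, expected):
    """Standardized Morbidity Ratio: O_i / E_i per region."""
    return [o / e for o, e in zip(observed, expected)]

def poisson_gamma(observed, expected, a=1.0, b=1.0):
    """Posterior mean relative risk under a Gamma(a, b) prior:
    (a + O_i) / (b + E_i)."""
    return [(a + o) / (b + e) for o, e in zip(observed, expected)]

O = [12, 0, 5]            # note the region with zero observed cases
E = [10.0, 4.0, 6.0]
risk_smr = smr(O, E)      # second region collapses to exactly 0
risk_pg = poisson_gamma(O, E)  # shrunk toward the prior, never 0
```

This shows the point made above: where the SMR returns a risk of exactly zero for a region with no observed cases, the Poisson-gamma estimate stays strictly positive.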
Abstract: In Multiple Sclerosis, pathological changes in the
brain result in deviations in signal intensity on Magnetic Resonance
Images (MRI). Quantitative analysis of these changes and their
correlation with clinical findings provides important information for
diagnosis, and this constitutes the objective of our work. A new
approach is developed: after enhancing the image contrast and
extracting the brain with a mathematical morphology algorithm, we
proceed to brain segmentation. Our approach is based on building a
statistical model from the data itself for normal brain MRI, including
clustering of tissue types. We then detect signal abnormalities (MS
lesions) as a rejection class containing voxels that are not explained
by the built model. We validate the method on MR images of
Multiple Sclerosis patients by comparing its results with those of
human expert segmentation.
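The rejection-class idea described in this abstract can be illustrated very simply: model "normal" intensities statistically and flag voxels the model does not explain. The sketch below stands in a single Gaussian for the full statistical model with tissue clustering, on synthetic intensities; everything here is an illustrative assumption.

```python
import statistics

# Fit a simple Gaussian to "normal" tissue intensities (a stand-in for
# the full per-tissue statistical model) and collect unexplained voxels
# into a rejection class. Intensity values are synthetic.

normal = [98, 101, 99, 102, 100, 97, 103, 100, 99, 101]
mu = statistics.mean(normal)
sigma = statistics.pstdev(normal)

def is_rejected(intensity, n_sigma=3.0):
    """Voxels further than n_sigma from the model join the rejection class."""
    return abs(intensity - mu) > n_sigma * sigma

scan = [100, 99, 160, 101, 45]   # 160 and 45 are abnormal signals
lesions = [i for i, v in enumerate(scan) if is_rejected(v)]
```

In the actual method the model is built per tissue cluster, but the detection principle is the same: lesions are whatever the normal-brain model fails to account for.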
Abstract: In this paper, full state feedback controllers
capable of regulating and tracking a speed trajectory are presented.
A fourth-order nonlinear mean value model of a 448 kW turbocharged
diesel engine, published earlier, is used for this purpose.
For designing controllers, the nonlinear model is linearized and
represented in state-space form. Full state feedback controllers
capable of meeting varying speed demands of drivers are presented.
The main focus here is to investigate the sensitivity of the controller
to perturbations in the parameters of the original nonlinear model.
The suggested controller is shown to be highly insensitive to
parameter variations. This indicates that the controller is likely to
perform with the same accuracy even after significant wear and tear
of the engine over years of use.
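The linearize-then-place-gains workflow described in this abstract can be sketched on a toy plant. The second-order system below is an illustrative stand-in, not the 448 kW engine model; the gains are placed by matching the closed-loop characteristic polynomial to desired poles at s = -2 and s = -3.

```python
# Full state feedback u = -K x on the toy unstable plant
#   x1' = x2,  x2' = 2*x1 + u   (A = [[0, 1], [2, 0]], B = [0, 1]).
# Closed-loop polynomial: s^2 + k2*s + (k1 - 2) = s^2 + 5s + 6,
# so K = (8, 5) places the poles at -2 and -3.

def simulate(K, x0=(1.0, 0.0), dt=1e-3, t_end=5.0):
    """Euler simulation of the closed-loop system."""
    x1, x2 = x0
    t = 0.0
    while t < t_end:
        u = -(K[0] * x1 + K[1] * x2)    # state feedback law
        dx1 = x2
        dx2 = 2.0 * x1 + u
        x1 += dx1 * dt
        x2 += dx2 * dt
        t += dt
    return x1, x2

x1f, x2f = simulate(K=(8.0, 5.0))  # state decays toward the origin
```

With the open-loop pole at +sqrt(2) the uncontrolled plant diverges; the placed gains stabilize it, which is the regulation half of the controller's task.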
Abstract: Image retrieval by shape similarity, given a template
shape, is particularly challenging, owing to the difficulty of deriving
a similarity measure that closely conforms to the common human
perception of similarity. In this paper, a new method for the
representation and comparison of shapes is presented, based on
the shape matrix and the snake model. It is invariant to scaling,
rotation and translation, and it can retrieve shape images with
missing or occluded parts. In the method, the deformation required
for the template to match the shape images, together with the
matching degree, is used to evaluate the similarity between them.
Abstract: The avalanche velocity (from the start to the track zone) has been estimated in the present model for an avalanche that is triggered artificially by an explosive device. The model was initially developed from the concepts of micro-continuum theories [1], underwater explosions [2] and fracture mechanics [3], with appropriate changes for the present model. The model has been computed for different values of the slab depth R, slope angle θ, snow density ρ, viscosity μ, eddy viscosity η* and couple stress parameter η. The applicability of the present model to avalanche forecasting has been highlighted.
Abstract: The emerging concept of e-government transforms the
way in which citizens deal with their governments. Citizens can
execute the intended services online, anytime and anywhere. This
results in great benefits for both the governments (a reduced number
of officers) and the citizens (more flexibility and time saving).
Therefore, building a maturity model to assess e-government portals
becomes desirable to help in the improvement process of such
portals. This paper aims at proposing an e-government maturity
model based on measuring the presence of best practices. The main
benefit of such a maturity model is to provide a way to rank an
e-government portal based on the best practices used, and also to
give a set of recommendations for advancing to the next stage of
the maturity model.
Abstract: The complex hybrid and nonlinear nature of many processes that are met in practice causes problems with both structure modelling and parameter identification; therefore, obtaining a model that is suitable for MPC is often a difficult task. The basic idea of this paper is to present an identification method for a piecewise affine (PWA) model based on a fuzzy clustering algorithm. First, we introduce the PWA model. Next, we tackle the identification method: we treat the fuzzy clustering algorithm, deal with the projections of the fuzzy clusters into the input space of the PWA model, and explain the estimation of the parameters of the PWA model by means of a modified least-squares method. Furthermore, we verify the usability of the proposed identification approach on a hybrid nonlinear batch reactor example. The results suggest that the batch reactor can be efficiently identified and thus formulated as a PWA model, which can eventually be used for model predictive control purposes.
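The final estimation step of the approach above can be sketched in isolation: once regions of the input space are known (here the partition is simply assumed given, rather than obtained by fuzzy clustering), each region of the PWA model gets its own affine fit by ordinary least squares. Data and partition below are synthetic.

```python
# Per-region affine least squares for a 1-D PWA model. The two-region
# partition is assumed known; in the paper's method it would come from
# projecting the fuzzy clusters onto the input space.

def affine_fit(xs, ys):
    """Closed-form 1-D least squares for y = a*x + b."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def pwa_true(x):
    """Synthetic piecewise-affine target: slope 1, then slope -2."""
    return x if x < 5.0 else 5.0 - 2.0 * (x - 5.0)

data = [(0.5 * i, pwa_true(0.5 * i)) for i in range(20)]
region1 = [(x, y) for x, y in data if x < 5.0]
region2 = [(x, y) for x, y in data if x >= 5.0]
a1, b1 = affine_fit(*zip(*region1))   # recovers y = x
a2, b2 = affine_fit(*zip(*region2))   # recovers y = 15 - 2x
```

On noise-free piecewise-affine data the local fits recover the generating parameters exactly; with real reactor data the clustering step decides which samples feed which fit.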
Abstract: The design and modeling of nonlinear systems require
knowledge of all internal acting parameters and effects. An empirical
alternative is to identify the system's transfer function from input
and output data as a black-box model. This paper presents a
procedure using a least squares algorithm for the identification of
feed drive system coefficients in the time domain, using a reduced
model based on windowed input and output data. The command and
response of the axis are first measured in the first 4 ms, and then
least squares is applied to predict the transfer function coefficients
for this displacement segment. From the identified coefficients, the
subsequent command response segments are estimated. The obtained
results reveal a considerable potential of the least squares method to
identify the system's time-based coefficients and to accurately
predict the command response as compared with measurements.
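The windowed least-squares step described here can be illustrated on a synthetic first-order discrete plant. The plant y[k] = a*y[k-1] + b*u[k-1] is an illustrative stand-in for the reduced feed-drive model, with a = 0.9, b = 0.5 as the "true" coefficients to recover from a short command/response window.

```python
# Least-squares identification of (a, b) in y[k] = a*y[k-1] + b*u[k-1]
# from a windowed input/output record, via the 2x2 normal equations.
# Plant coefficients and the command sequence are synthetic.

true_a, true_b = 0.9, 0.5
u = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0]   # command window
y = [0.0]
for k in range(1, len(u)):
    y.append(true_a * y[k - 1] + true_b * u[k - 1])  # measured response

# Normal equations for theta = (a, b) minimising
#   sum_k (y[k] - a*y[k-1] - b*u[k-1])^2
ks = range(1, len(y))
s11 = sum(y[k - 1] ** 2 for k in ks)
s12 = sum(y[k - 1] * u[k - 1] for k in ks)
s22 = sum(u[k - 1] ** 2 for k in ks)
r1 = sum(y[k] * y[k - 1] for k in ks)
r2 = sum(y[k] * u[k - 1] for k in ks)
det = s11 * s22 - s12 * s12
a_hat = (s22 * r1 - s12 * r2) / det
b_hat = (s11 * r2 - s12 * r1) / det
```

Because the data are noise-free and the plant lies in the model class, the window recovers the coefficients exactly; the identified (a_hat, b_hat) can then be iterated forward to predict the next response segments, as the procedure above describes.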
Abstract: Due to stringent emissions legislation for diesel
engines and increasing demands on fuel consumption, the
importance of detailed 3D simulation of fuel injection, mixing and
combustion has increased in recent years. In the present
work, FIRE code has been used to study the detailed modeling of
spray and mixture formation in a Caterpillar heavy-duty diesel
engine. The paper provides an overview of the submodels
implemented, which account for liquid spray atomization, droplet
secondary break-up, droplet collision, impingement, turbulent
dispersion and evaporation. The simulation was performed from
intake valve closing (IVC) to exhaust valve opening (EVO). The
predicted in-cylinder pressure is validated by comparison with
existing experimental data. A good agreement between the predicted
and experimental values confirms the accuracy of the numerical
predictions obtained in the present work. Predictions of engine
emissions were also performed, and a good quantitative agreement
between measured and predicted NOx and soot emission data was
obtained with the use of the Zeldovich mechanism and the
Hiroyasu model. In addition, the results reported in this paper
illustrate that the numerical simulation can be one of the most
powerful and beneficial tools for the internal combustion engine
design, optimization and performance analysis.
Abstract: The equilibrium, thermodynamics and kinetics of the
biosorption of Cd(II) and Pb(II) by a spore-forming Bacillus (MGL
75) were investigated under different experimental conditions. The
Langmuir, Freundlich and Dubinin-Radushkevich (D-R)
equilibrium adsorption models were applied to describe the
biosorption of the metal ions by MGL 75 biomass. The Langmuir
model fitted the equilibrium data better than the other models.
The maximum adsorption capacities qmax for lead(II) and cadmium(II)
were found by the Langmuir model to be 158.73 mg/g and 91.74 mg/g,
respectively. The values of the mean free energy determined with the
D-R equation showed that the adsorption process is physisorption. The
thermodynamic parameters Gibbs free energy (ΔG°), enthalpy (ΔH°),
and entropy (ΔS°) changes were also calculated, and the values
indicated that the biosorption process was exothermic and
spontaneous. Experimental data were also used to study biosorption
kinetics using pseudo-first-order and pseudo-second-order kinetic
models. The kinetic parameters, rate constants, equilibrium sorption
capacities and related correlation coefficients were calculated and
discussed. The results showed that the biosorption of both metal
ions was well described by pseudo-second-order kinetics.
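The Langmuir fit reported in this abstract is commonly done through the linearized form Ce/qe = Ce/qmax + 1/(qmax·b): a straight-line fit of Ce/qe against Ce yields qmax from the slope. The sketch below generates synthetic equilibrium data using the reported Pb(II) capacity qmax = 158.73 mg/g and a purely illustrative affinity constant b = 0.05 L/mg, then recovers both by least squares.

```python
# Linearised Langmuir fit: y = Ce/qe against x = Ce gives
# slope = 1/qmax and intercept = 1/(qmax*b). Synthetic data;
# b = 0.05 L/mg is an illustrative assumption.

qmax_true, b_true = 158.73, 0.05

def langmuir(ce):
    """Langmuir isotherm qe = qmax*b*Ce / (1 + b*Ce)."""
    return qmax_true * b_true * ce / (1.0 + b_true * ce)

ce = [5.0, 10.0, 25.0, 50.0, 100.0, 200.0]   # equilibrium conc., mg/L
qe = [langmuir(c) for c in ce]                # uptake, mg/g

xs, ys = ce, [c / q for c, q in zip(ce, qe)]  # linearised coordinates
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
qmax_fit = 1.0 / slope          # recovers 158.73 mg/g
b_fit = slope / intercept       # recovers 0.05 L/mg
```

The same linearize-and-regress pattern applies to the pseudo-second-order kinetic model via t/qt = 1/(k·qe²) + t/qe.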
Abstract: e-Government structures permit the government to operate in a more transparent and accountable manner, which increases the power of the individual in relation to that of the government. This paper identifies the factors that determine customers' attitudes towards e-Government services, using a theoretical model based on the Technology Acceptance Model. Data relating to the constructs were collected from 200 respondents. The research model was tested using Structural Equation Modeling (SEM) techniques via the Analysis of Moment Structures (AMOS 16) software. SEM is a comprehensive approach to testing hypotheses about relations among observed and latent variables. The proposed model fits the data well. The results demonstrated that acceptance of e-Government services can be explained in terms of compatibility and attitude towards e-Government services. If e-Government services are set up to be compatible with the way users work, users are more likely to adopt them, owing to their familiarity with the Internet for various official, personal, and recreational uses. In addition, managerial implications for government policy makers, government agencies, and system developers are discussed.
Abstract: In this study, active tendons with Proportional-Integral-
Derivative (PID) type controllers were applied to an SDOF and an
MDOF building model. Physical models of the buildings were
constructed from virtual springs, dampers and rigid masses, and the
equations of motion for all degrees of freedom were then obtained.
Matlab Simulink was utilized to obtain block diagrams for these
equations of motion. Parameters for the controller actions were
found by trial and error. After earthquake acceleration data were
applied to the systems, building characteristics such as
displacements, velocities, accelerations and transfer functions were
analyzed for all degrees of freedom. Comparisons of displacement
vs. time, velocity vs. time, acceleration vs. time and transfer
function (dB) vs. frequency (Hz) were made for the uncontrolled
and controlled buildings. The results show that the method seems
feasible.
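The SDOF case of the setup above can be sketched compactly: a mass-spring-damper driven by a base-acceleration pulse, with and without a PID control force on the mass. All values below (mass, stiffness, damping, gains, pulse shape) are illustrative, not the building parameters from the study.

```python
# SDOF building model m*x'' + c*x' + k*x = u - m*ag with a PID
# control force u acting through the tendon. Parameters and the
# rectangular ground-acceleration pulse are illustrative assumptions.

def simulate(kp=0.0, ki=0.0, kd=0.0, m=1000.0, c=200.0, k=50000.0,
             dt=1e-3, t_end=5.0):
    """Semi-implicit Euler simulation; returns peak |displacement|."""
    x = v = integral = 0.0
    peak, t = 0.0, 0.0
    while t < t_end:
        ag = 1.0 if t < 0.5 else 0.0            # base acceleration pulse
        integral += x * dt
        u = -(kp * x + ki * integral + kd * v)  # PID tendon force
        a = (-c * v - k * x + u) / m - ag
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
        t += dt
    return peak

peak_unc = simulate()                              # uncontrolled
peak_pid = simulate(kp=2.0e5, ki=1.0e4, kd=2.0e4)  # PID-controlled
```

The controlled run shows a much smaller peak displacement, mirroring the uncontrolled-versus-controlled comparisons made in the study.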
Abstract: A gold passbook is an investment tool that is especially
suitable for investors making small investments in physical gold.
The gold passbook carries lower risk than other ways of investing in
gold, but its price is still affected by the gold price, and many
factors influence the gold price. Therefore, building a model to
predict the price of the gold passbook can both reduce the risk of
investment and increase the benefits. This study investigates the
important factors that influence the gold passbook price and utilizes
the Group Method of Data Handling (GMDH) to build the predictive
model. This method can not only identify the significant variables
but also perform well in prediction. Finally, the significant variables
for the gold passbook price, from which it can be predicted by
GMDH, are the US dollar exchange rate, international petroleum
price, unemployment rate, wholesale price index, rediscount rate,
foreign exchange reserves, misery index, prosperity coincident
index and industrial index.
Abstract: Many recent high energy physics calculations
involving charm and beauty invoke the wave function at the origin
(WFO) for the meson bound state. Uncertainties of charm and beauty
quark masses and different models for potentials governing these
bound states require a simple numerical algorithm for evaluation of
the WFO's for these bound states. We present a simple algorithm for
this purpose, which provides WFO's with high precision compared
with similar ones already obtained in the literature.
Abstract: A business process model describes the process flow of a
business and can be seen as a requirement for developing a
software application. This paper discusses a BPM2CD guideline
which complements the Model Driven Architecture concept by
suggesting how to create a platform-independent software model in
the form of a UML class diagram from a business process model. An
important step is the identification of UML classes from the business
process model. A technique for object-oriented analysis called
domain analysis is borrowed, and key concepts in the business
process model are discovered and proposed as candidate classes
for the class diagram. The paper enhances this step by using ontology
search to help identify important classes for the business domain. As
an ontology is a source of knowledge for a particular domain, which
itself can link to ontologies of related domains, the search can give a
refined set of candidate classes for the resulting class diagram.
Abstract: Evolutionary Algorithms are population-based,
stochastic search techniques, widely used as efficient global
optimizers. However, many real-life optimization problems require
finding optimal solutions to complex, high-dimensional, multimodal
problems involving computationally very expensive
fitness function evaluations. Use of evolutionary algorithms in such
problem domains is thus practically prohibitive. An attractive
alternative is to build meta models or use an approximation of the
actual fitness functions to be evaluated. These meta models are
orders of magnitude cheaper to evaluate than the actual function
evaluation. Many regression and interpolation tools are available to
build such meta models. This paper briefly discusses the
architectures and use of such meta-modeling tools in an evolutionary
optimization context. We further present two evolutionary algorithm
frameworks which involve use of meta models for fitness function
evaluation. The first framework, namely the Dynamic Approximate
Fitness based Hybrid EA (DAFHEA) model [14] reduces
computation time by controlled use of meta-models (in this case
approximate model generated by Support Vector Machine
regression) to partially replace the actual function evaluation by
approximate function evaluation. However, the underlying
assumption in DAFHEA is that the training samples for the
meta-model are generated from a single uniform model. This does not take
into account uncertain scenarios involving noisy fitness functions.
The second model, DAFHEA-II, an enhanced version of the original
DAFHEA framework, incorporates a multiple-model based learning
approach for the support vector machine approximator to handle
noisy functions [15]. Empirical results obtained by evaluating the
frameworks using several benchmark functions demonstrate their
efficiency.
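The controlled-evaluation loop that the DAFHEA frameworks are built around can be sketched with deliberately simplified stand-ins: a nearest-neighbour archive replaces the Support Vector Machine regression surrogate, and the cheap sphere function replaces the expensive fitness evaluation. Everything below is an illustrative toy, not the published frameworks.

```python
import random

# Surrogate-assisted evolutionary search: most candidates are ranked
# by a cheap surrogate built from archived exact evaluations, and only
# one surrogate-best offspring per generation gets an exact (expensive)
# evaluation. Nearest-neighbour lookup stands in for SVM regression.

random.seed(0)

def expensive(x):
    return sum(v * v for v in x)     # sphere; stands in for a costly model

def nearest_fit(archive, x):
    """Surrogate: fitness of the nearest exactly-evaluated point."""
    point, fit = min(archive,
                     key=lambda pf: sum((a - b) ** 2
                                        for a, b in zip(pf[0], x)))
    return fit

dim, pop_size, gens = 3, 10, 30
pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
archive = [(p, expensive(p)) for p in pop]   # initial exact evaluations
evals = len(archive)

for _ in range(gens):
    parents = sorted(pop, key=lambda p: nearest_fit(archive, p))[:5]
    offspring = [[v + random.gauss(0, 0.5) for v in random.choice(parents)]
                 for _ in range(pop_size)]
    # controlled use of the true function: one exact call per generation
    best_guess = min(offspring, key=lambda p: nearest_fit(archive, p))
    archive.append((best_guess, expensive(best_guess)))
    evals += 1
    pop = parents + offspring

best_point, best_fit = min(archive, key=lambda pf: pf[1])
```

The budget of exact evaluations grows only linearly with generations (here 10 initial + 30 controlled calls), which is the cost saving that motivates meta-model-assisted evolutionary optimization.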
Abstract: This paper deals with a numerical analysis of the
transient response of composite beams with strain rate dependent
mechanical properties by use of a finite difference method. The
equations of motion based on Timoshenko beam theory are derived.
The geometric nonlinearity effects are taken into account with von
Kármán large deflection theory. The finite difference method, in
conjunction with the Newmark average acceleration method, is
applied to solve the differential equations. A modified progressive damage
model which accounts for strain rate effects is developed based on
the material property degradation rules and modified Hashin-type
failure criteria and added to the finite difference model. The
components of the model are implemented into a computer code in
Mathematica 6. Glass/epoxy laminated composite beams with
constant and strain rate dependent mechanical properties under
dynamic loads are analyzed. The effects of strain rate on the
dynamic response of the beam for various stacking sequences, loads
and boundary conditions are investigated.
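The Newmark average-acceleration scheme named above (beta = 1/4, gamma = 1/2) can be shown on the simplest possible case: an undamped single-degree-of-freedom oscillator m·x'' + k·x = 0 rather than the full composite beam model. For a linear undamped system this variant is unconditionally stable and conserves the oscillator's mechanical energy exactly.

```python
import math

# One-DOF Newmark average-acceleration integration (beta=1/4,
# gamma=1/2) of m*x'' + k*x = 0; k is chosen so the period is 1 s.

def newmark_free_vibration(m=1.0, k=4.0 * math.pi ** 2,
                           x0=1.0, v0=0.0, dt=0.01, steps=100):
    x, v = x0, v0
    a = -k * x / m                                  # initial acceleration
    for _ in range(steps):
        x_pred = x + dt * v + 0.25 * dt * dt * a    # displacement predictor
        v_pred = v + 0.5 * dt * a                   # velocity predictor
        a = -k * x_pred / (m + 0.25 * dt * dt * k)  # solve m*a + k*x(a) = 0
        x = x_pred + 0.25 * dt * dt * a             # corrected displacement
        v = v_pred + 0.5 * dt * a                   # corrected velocity
    return x, v

# integrate over exactly one natural period (T = 1 s)
x_f, v_f = newmark_free_vibration()
```

After one full period the displacement returns to its initial value up to a small period-elongation error, while the discrete energy 0.5·m·v² + 0.5·k·x² is preserved to machine precision; the beam analysis above applies the same step to the full finite-difference system.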