Abstract: Xanthan gum is one of the major commercial
biopolymers. Due to its excellent rheological properties, xanthan gum
is used in many applications, mainly in the food industry. Commercial
production of xanthan gum uses glucose as the carbon substrate;
consequently, the cost of xanthan production is high. One way to
decrease the price of xanthan is to use cheaper substrates such as
agricultural wastes. Iran is one of the largest date-producing countries;
however, approximately 50% of date production is wasted annually.
The goal of this study is to produce xanthan gum from waste dates
using Xanthomonas campestris PTCC1473 by submerged
fermentation. The effects of three variables (phosphorus amount,
nitrogen amount, and agitation rate), each at three levels, were studied
using response surface methodology (RSM). Statistical analysis with
the Design Expert 7.0.0 software showed that xanthan production
increased with an increasing level of phosphorus, while a low level of
nitrogen led to higher xanthan production. Increasing the agitation
rate also had a positive influence on the xanthan amount. The
statistical model identified the optimum conditions for xanthan as
nitrogen amount = 3.15 g/l, phosphorus amount = 5.03 g/l, and
agitation = 394.8 rpm. To validate the model, experiments were
carried out under the optimum conditions. The mean result for
xanthan was 6.72±0.26, close to the value predicted by RSM.
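The RSM optimization above fits a quadratic response surface and locates its stationary point. A minimal single-factor sketch of the same idea (the yield data below are invented for illustration, not the study's measurements):

```python
import numpy as np

# Hypothetical data: xanthan yield (g/l) versus agitation rate (rpm).
agitation = np.array([200.0, 300.0, 400.0, 500.0, 600.0])
yield_gl = np.array([4.1, 5.9, 6.7, 6.2, 4.8])

# Fit a quadratic response curve y = a*x^2 + b*x + c
# (RSM fits a full quadratic in all factors; one factor shown here).
a, b, c = np.polyfit(agitation, yield_gl, 2)

# For a concave fit (a < 0) the optimum lies at the stationary point.
x_opt = -b / (2 * a)
y_opt = a * x_opt**2 + b * x_opt + c
print(x_opt, y_opt)
```

In the full three-factor study, the stationary point of the fitted quadratic surface gives the reported optimum combination of nitrogen, phosphorus, and agitation.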
Abstract: Incompressible Navier-Stokes equations are reviewed
in this work. Three-dimensional Navier-Stokes equations are solved
analytically. The mathematical derivation shows that the solutions
for the zero and constant pressure gradients are similar. Descriptions
of the proposed formulation and validation against two laminar
experiments and three different turbulent flow cases are reported in
this paper. Even though the analytical solution is derived for
non-reacting flows, it can reproduce trends for cases including
combustion.
Abstract: This paper proposes a methodology for the analysis of
the dynamic behavior of a robotic manipulator in continuous
time. Initially this nonlinear system is decomposed into linear
submodels and analyzed in the context of Linear
Parameter-Varying (LPV) systems. The obtained linear
submodels, which represent the local dynamic behavior of the
robotic manipulator at selected operating points, are grouped in
a Takagi-Sugeno fuzzy structure. The obtained fuzzy model is
analyzed and validated through analog simulation as a universal
approximator of the robotic manipulator.
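As a rough illustration of grouping local linear submodels in a Takagi-Sugeno structure (the matrices, scheduling range, and membership functions below are hypothetical placeholders, not the manipulator's actual submodels):

```python
import numpy as np

# Two hypothetical local submodels x' = A_i x, valid near two operating points.
A1 = np.array([[0.0, 1.0], [-2.0, -0.5]])
A2 = np.array([[0.0, 1.0], [-6.0, -1.5]])

def memberships(theta, theta_min=-1.0, theta_max=1.0):
    """Complementary linear memberships over the operating range; sum to 1."""
    w2 = (theta - theta_min) / (theta_max - theta_min)
    w2 = min(max(w2, 0.0), 1.0)
    return 1.0 - w2, w2

def ts_dynamics(x, theta):
    """Takagi-Sugeno blend: weighted combination of the local submodels."""
    w1, w2 = memberships(theta)
    return (w1 * A1 + w2 * A2) @ x

x = np.array([0.1, 0.0])
print(ts_dynamics(x, 0.0))   # halfway: equal blend of both submodels
```

The fuzzy model interpolates smoothly between the local linear dynamics as the scheduling variable (for example a joint angle) moves across the operating range.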
Abstract: Most real queuing systems include special properties and constraints that cannot be analyzed directly using the results of solved classical queuing models. Non-Markovian behavior, non-exponential distribution patterns, and service constraints are examples of such conditions. This paper presents an applied general algorithm for analyzing and optimizing queuing systems. The stages of the algorithm are described through a real case study: an almost entirely non-Markovian system with a limited number of customers and limited capacities, exhibiting many of the exceptions common in real queuing networks. Simulation is used to optimize this system. The stages introduced in this paper include primary modeling, determining the kind of queuing system, index definition, statistical analysis and goodness-of-fit testing, model validation, and simulation-based optimization of the system.
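As a minimal sketch of the simulation stage for such a non-Markovian, capacity-constrained system (the arrival rate, service distribution, and number of waiting places below are hypothetical placeholders):

```python
import random

random.seed(0)

def simulate(n_customers=10000, waiting_places=5, arrival_rate=0.8):
    """Single-server FIFO queue with limited waiting places and a
    non-exponential (uniform) service time."""
    departures = []            # departure times of customers in the system
    t = 0.0
    waits, rejected = [], 0
    for _ in range(n_customers):
        t += random.expovariate(arrival_rate)          # Poisson arrivals
        departures = [d for d in departures if d > t]  # drop finished customers
        if len(departures) > waiting_places:           # queue full: customer lost
            rejected += 1
            continue
        start = max([t] + departures)                  # wait for the server
        departures.append(start + random.uniform(0.5, 1.5))  # uniform service
        waits.append(start - t)
    return sum(waits) / len(waits), rejected

mean_wait, rejected = simulate()
print(mean_wait, rejected)
```

Because the service times are not exponential, closed-form Markov-chain results do not apply, and performance indices such as the mean wait and loss count are estimated by simulation as in the algorithm described above.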
Abstract: Passive solar systems were conceived to exploit solar
energy as fully as possible in cold climates and at high altitudes.
They spread all over the world until the 1980s without any attention
to the specific climate or to summer behavior; this caused the
deactivation of the systems due to a series of problems connected to
summer overheating, complex management, and the accumulation of
dust.
To date, European regulation limits only winter consumption
without any attention to summer behavior; however, the recent
European standard EN 15251 underlines the relevance of indoor
comfort and the necessity of validating analytical studies by
monitoring case studies.
In the proposed paper we demonstrate that the solar wall is an
efficient system from both the thermal comfort and the energy saving
points of view, and that it is the most suitable for our temperate
climates because it can also be used as a passive cooling system. In
particular, the paper presents an experimental and numerical analysis
carried out on a case study with nine different passive solar systems
in Ancona, Italy.
We carried out a detailed study of the dwelling served by the
solar wall through monitoring and evaluation of the indoor
conditions.
Analysis of the monitored data, on the basis of recognized comfort
models (ISO, ASHRAE, Givoni's BBCC), shows that the solar wall
behaves optimally in the intermediate seasons. In the winter phase
this passive system gives more advantages in terms of energy
consumption than the other systems, because it provides greater heat
gain and therefore lower consumption. In summer, when the outside
air temperature returns to the mean seasonal value, indoor comfort
is optimal thanks to efficient cross-ventilation activated by the wall
itself.
Abstract: One major difficulty facing developers of
concurrent and distributed software is the analysis of concurrency-based
faults such as deadlocks. Petri nets are used extensively in the
verification of the correctness of concurrent programs. ECATNets are a
category of algebraic Petri nets based on a sound combination of
algebraic abstract data types and high-level Petri nets. ECATNets have
sound and complete semantics because of their integration in
rewriting logic and its programming language Maude. Rewriting
logic is considered one of the most powerful logics for the
description, verification, and programming of concurrent systems. We
previously proposed a method for translating Ada-95 tasking
programs into the ECATNets formalism (Ada-ECATNet) and showed
that the ECATNets formalism provides a more compact translation of
Ada programs than other approaches based on simple
Petri nets or colored Petri nets. We also showed previously how the
ECATNet formalism offers Ada many validation and verification
tools, such as simulation, model checking, reachability analysis, and
static analysis. In this paper, we describe the implementation of our
translation of Ada programs into ECATNets.
Abstract: Data mining is an extraordinarily demanding field concerned with the extraction of implicit knowledge and relationships that are not explicitly stored in databases. A wide variety of data mining methods have been introduced (classification, characterization, generalization, etc.), and each of these methods includes more than one algorithm. A data mining system involves different user categories, which means that user behavior must be a component of the system. The problem at this level is to know which algorithm of which method to employ for an exploratory purpose, which one for a decisional purpose, and how they can collaborate and communicate. The agent paradigm presents a new way of conceiving and realizing data mining systems. The purpose is to combine different data mining algorithms to prepare elements for decision-makers, benefiting from the possibilities offered by multi-agent systems. In this paper the agent framework for data mining is introduced, and its overall architecture and functionality are presented. The validation is made on spatial data, and the principal results are presented.
Abstract: Web applications have become complex and crucial for many firms, especially when combined with areas such as CRM (Customer Relationship Management) and BPR (Business Process Reengineering). The scientific community has focused attention on Web application design, development, analysis, and testing, by studying and proposing methodologies and tools. Static and dynamic techniques may be used to analyze existing Web applications. The use of traditional static source code analysis may be very difficult, due to the presence of dynamically generated code and the multi-language nature of the Web. Dynamic analysis may be useful, but it has an intrinsic limitation: the low number of program executions used to extract information. Our reverse engineering analysis, used in our WAAT (Web Applications Analysis and Testing) project, applies mutational techniques in order to exploit server-side execution engines to accomplish part of the dynamic analysis. This paper studies the effects of mutation source code analysis applied to Web software to build application models. Mutation-based generated models may contain more information than necessary, so we need a pruning mechanism.
Abstract: This study explores how the mechanics of learning
paves the way to engineering innovation. Theories related to learning
in the new product/service innovation are reviewed from an
organizational perspective, behavioral perspective, and engineering
perspective. From this, an engineering team's external interactions
for knowledge brokering and its internal composition for skill balance
are examined from a learning and innovation viewpoint. As a result,
an integrated learning model is developed by reconciling the
theoretical perspectives as well as developing propositions that
emphasize the centrality of learning, and its drivers, in the
engineering product/service development. The paper also provides a
review and partial validation of the propositions using the results of a
previously published field study in the aerospace industry.
Abstract: Nondestructive testing in engineering is an inverse
Cauchy problem for the Laplace equation. In this paper the problem
of nondestructive testing is expressed by Laplace's equation with
third-kind boundary conditions. In order to find the unknown values on
the boundary, the method of fundamental solutions is introduced and
realized. Because of the ill-posedness of the studied problems, the TSVD
regularization technique, in combination with the L-curve criterion and
the Generalized Cross Validation (GCV) criterion, is employed. Numerical
results show that the TSVD method combined with the L-curve criterion is
more efficient than the TSVD method combined with the GCV criterion.
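A minimal sketch of TSVD regularization on a toy ill-posed system (the Hilbert matrix, the noise level, and the truncation level below are illustrative assumptions, not the paper's test case; in practice k would be chosen by the L-curve or GCV criterion):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
# Hilbert matrix: a classic severely ill-conditioned operator
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true + 1e-8 * rng.standard_normal(n)   # noisy right-hand side

U, s, Vt = np.linalg.svd(A)

def tsvd_solve(k):
    """Truncated SVD solution: keep only the k largest singular values."""
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]   # noise amplified by tiny singular values
x_tsvd = tsvd_solve(6)                           # k = 6 chosen by hand here
err_naive = np.linalg.norm(x_naive - x_true)
err_tsvd = np.linalg.norm(x_tsvd - x_true)
print(err_naive, err_tsvd)
```

Discarding the small singular values suppresses the noise amplification that makes the unregularized solution useless; the choice of k trades truncation error against noise error, which is exactly what the L-curve and GCV criteria automate.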
Abstract: Recently, there have been considerable efforts towards the convergence of P2P and Grid computing in order to reach a solution that takes the best of both worlds by exploiting the advantages that each offers. Augmenting the services of the Grid with the peer-to-peer model promises to eliminate bottlenecks and ensure greater scalability, availability, and fault-tolerance. The Grid Information Service (GIS) directly influences the quality of service of grid platforms. Most of the proposed solutions for decentralizing the GIS are based on completely flat overlays. The main contributions of this paper are the investigation of a novel resource discovery framework for Grid implementations based on a hierarchy of structured peer-to-peer overlay networks, and the introduction of a discovery algorithm utilizing the proposed framework. Validation of the framework's performance is done via simulation. Experimental results show that the proposed organization has the advantage of being scalable while providing fault-isolation, effective bandwidth utilization, and hierarchical access control. In addition, it leads to a reliable, guaranteed sub-linear search which returns results within a bounded interval of time and with a smaller amount of generated traffic within each domain.
Abstract: Today, transport and logistic systems are often tightly
integrated into production. Lean production and just-in-time delivery create multiple constraints that have to be fulfilled. As transport networks have often evolved over time, they are very
expensive to change. This paper describes a discrete-event simulation
system which simulates transportation models using real-time
resource routing and collision avoidance. It allows for the
specification of custom control algorithms and the validation of new
strategies. The simulation is integrated into a virtual reality (VR)
environment and can be displayed in 3-D to show the progress.
Simulation elements can be selected through VR metaphors. All data
gathered during the simulation can be presented in a detailed summary afterwards. The included cost-benefit calculation can help to optimize the financial outcome. The operation of this approach is shown by the example of a timber harvest simulation.
Abstract: This paper presents a study of the hardness profile of a spur gear heated by an induction heating process as a function of the machine parameters, such as the power (kW), the heating time (s) and the generator frequency (kHz). The overall work is realized by 3D finite-element simulation applied to the process by coupling and solving the electromagnetic field and heat transfer problems, and it was performed in three distinct steps. First, a Comsol 3D model was built using an adequate formulation and taking into account the material properties and the machine parameters. Second, a convergence study was conducted to optimize the mesh. Then, the surface temperatures and the case depths were analyzed in depth as a function of the initial current density and the heating time in the medium frequency (MF) and high frequency (HF) heating modes, and the edge effect was studied. Finally, the simulation results are validated using experimental tests.
Abstract: The UK Government has emphasized the role of Local Authorities as key players in its flagship residential energy efficiency strategies, through identifying and targeting areas for energy efficiency improvements. Residential energy consumption in England is characterized by significant geographical variation in energy demand, which makes centralized targeting of areas for energy efficiency intervention difficult. This paper draws on research which aims to understand how demographic, social, economic, urban form and climatic factors influence the geographical variations in English residential gas consumption. The paper reports the findings of a multiple regression model showing that 64% of the geographical variation in residential gas consumption is accounted for by variations in these factors. Results from this study, after further refinement and validation, can be used by Local Authorities to identify areas within their boundaries that have higher than expected gas consumption; these may be prime targets for energy efficiency initiatives.
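The variance-explained figure above is the R² of an ordinary least squares fit. A minimal sketch with synthetic stand-in data (the predictors and coefficients below are invented, not the study's variables):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.standard_normal((n, 3))             # hypothetical standardized factors
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.standard_normal(n)  # stand-in for gas consumption

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# R^2: share of the variation in y accounted for by the predictors
resid = y - A @ coef
r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(round(r2, 2))
```

An R² of 0.64, as reported, would mean the fitted factors account for 64% of the geographical variation, with the residuals flagging areas whose consumption is higher than the model predicts.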
Abstract: This paper presents a computer simulation model based on system dynamics methodology for analyzing the dynamic characteristics of the input energy structure in agriculture; Bangladesh is used here as a case study for model validation. The model provides an input energy structure linking the major energy flows with human energy and draft energy from cattle as well as tractors and/or power tillers, irrigation, chemical fertilizer and pesticide. The evaluation is made in terms of different energy-dependent indicators. During the simulation period, the energy input to agriculture increased from 6.1 to 19.15 GJ/ha (an increase of 2.14-fold over the base year), while the corresponding energy output in terms of food, fodder and fuel increased from 71.55 to 163.58 GJ/ha (a 1.28-fold increase). This result indicates that the energy input in Bangladeshi agricultural production is increasing faster than the energy output. Problems such as global warming, nutrient loading and pesticide pollution can be associated with this increasing input. For an assessment, a comparative statement of input energy use in the agriculture of developed countries (DCs) and least developed countries (LDCs), including Bangladesh, has been made. The performance of the model is found satisfactory for analyzing the agricultural energy system of LDCs.
Abstract: The present study focuses on the parameters of
Artificial Neural Networks (ANN). Sensitivity analysis is
applied to assess the effect of the parameters of an ANN on the prediction
of the turbidity of raw water in a water treatment plant. The results show
that the transfer function of the hidden layer is a critical parameter of the
ANN: when the transfer function changes, the reliability of the water
turbidity prediction differs greatly. Moreover, the estimated water
turbidity is less sensitive to the number of training epochs and the learning
rate than to the number of neurons in the hidden layer. Therefore, it is
important to select an appropriate transfer function and a suitable number
of neurons in the hidden layer in the process of parameter training and
validation.
Abstract: A neuron can emit spikes on an irregular time basis, and by averaging over a certain time window one would ignore a lot of information. It is known that in the context of fast information processing there is not sufficient time to sample an average firing rate of the spiking neurons. The present work shows that spiking neurons are capable of computing radial basis functions by storing the relevant information in the neurons' delays. One of the fundamental findings of this research is that using overlapping receptive fields to encode the data patterns increases the network's clustering capacity. The clustering algorithm discussed here is interesting from both a computer science and a neuroscience point of view.
Abstract: We present the simulation and realization of a battery
charge regulator (BCR) for an earth-observation microsatellite. The tests
were performed on a 12 V, 24 Ah battery pack and a solar array with an open-circuit voltage of 100 V and an optimum power of about
250 W. The battery is charged by the solar module. The principle is to adapt the output voltage of the solar module to the battery by
using the technique of pulse width modulation (PWM). Among the different battery charging techniques, we opted for the ON/OFF
controller, a standard and simple technique that is easy to
implement on board. Validation was made by simulation with the Proteus Isis
Professional software. The circuit and the program of this prototype
are based on the PIC16F877 microcontroller. A serial interface connecting to a PC was also realized, to view and save data and graphics
in real time; for the visualization of data and graphs we developed an interface tool in Visual Basic .NET (VB).
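The ON/OFF control decision can be sketched as a simple hysteresis rule (the voltage thresholds below are assumed values for illustration, not the BCR's actual set points):

```python
# Hypothetical thresholds for a 12 V lead-acid-style battery pack.
V_ON = 13.2   # reconnect threshold (V), assumed value
V_OFF = 14.4  # disconnect threshold (V), assumed value

def charge_switch(v_batt, charging):
    """ON/OFF control with hysteresis: charge until the upper threshold
    is reached, then stay off until the voltage falls below the lower one."""
    if charging and v_batt >= V_OFF:
        return False   # battery full enough: stop charging
    if not charging and v_batt <= V_ON:
        return True    # voltage dropped: resume charging
    return charging    # otherwise keep the previous state

state = True
for v in [13.0, 13.8, 14.5, 14.0, 13.1]:
    state = charge_switch(v, state)
    print(v, state)
```

The hysteresis band prevents rapid toggling of the switch around a single threshold; on the real prototype this decision runs on the PIC16F877 and drives the PWM switching stage.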
Abstract: As part of the development of a numerical method for
close-capture exhaust systems for machining devices, a test rig
recreating a situation similar to a grinding operation, but in a
perfectly controlled environment, is used. The properties of the
obtained spray of solid particles are initially characterized using
particle tracking velocimetry (PTV), in order to obtain input and
validation parameters for numerical simulations. The dispersion of a
tracer gas (SF6) emitted simultaneously with the particle jet is then
studied experimentally, as the dispersion of such a gas is
representative of that of finer particles, whose aerodynamic response
time is negligible. Finally, complete modeling of the test rig is
achieved to allow comparison with experimental results and thus to
progress towards validation of the models used to describe a
two-phase flow generated by a machining operation.
Abstract: The objective of this study was to develop and compare alternative prediction equations for the lean meat proportion (LMP) of lamb carcasses. Forty (40) male lambs, 22 of the Portuguese local breed Churra Galega Bragançana and 18 of the Suffolk breed, were used. The lambs were slaughtered, and the carcasses were weighed approximately 30 min later in order to obtain the hot carcass weight (HCW). After cooling at 4 ºC for 24 h, a set of seventeen carcass measurements was recorded. The left side of each carcass was dissected into muscle, subcutaneous fat, inter-muscular fat, bone, and remainder (major blood vessels, ligaments, tendons, and thick connective tissue sheets associated with muscles), and the LMP was evaluated as the dissected muscle percentage. Prediction equations for the LMP were developed, and fitting quality was evaluated through the coefficient of determination of estimation (R²e) and the standard error of estimate (SEE). Model validation was performed by k-fold cross-validation, and the coefficient of determination of prediction (R²p) and the standard error of prediction (SEP) were computed. The BT2 measurement was the best single predictor and accounted for 37.8% of the LMP variation with a SEP of 2.30%. The prediction of the LMP of lamb carcasses can be based on simple models, using the HCW and one fat thickness measurement as predictors.
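The k-fold cross-validation used for model validation can be sketched as follows (the data are synthetic stand-ins, not the carcass measurements; the single predictor is illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 40, 5
x = rng.uniform(5.0, 20.0, n)             # hypothetical predictor (e.g., a weight)
y = 30.0 + 1.5 * x + rng.normal(0, 2, n)  # hypothetical response (e.g., LMP, %)

# Split the 40 observations into k folds at random.
idx = rng.permutation(n)
folds = np.array_split(idx, k)

errors = []
for i in range(k):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    b, a = np.polyfit(x[train], y[train], 1)   # fit on k-1 folds (slope, intercept)
    errors.extend(y[test] - (b * x[test] + a)) # predict the held-out fold

# Standard error of prediction (SEP) from the pooled out-of-fold errors.
sep = float(np.sqrt(np.mean(np.square(errors))))
print(round(sep, 2))
```

Each observation is predicted exactly once by a model that never saw it, so the resulting SEP (and R²p) measures genuine predictive quality rather than in-sample fit.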