Abstract: Many recent high-energy physics calculations
involving charm and beauty invoke the wave function at the origin
(WFO) of the meson bound state. Uncertainties in the charm and beauty
quark masses, together with the variety of model potentials governing these
bound states, call for a simple numerical algorithm to evaluate
the WFOs of these bound states. We present a simple algorithm for
this purpose which provides WFOs with high precision compared
with similar values already obtained in the literature.
Abstract: The present models and simulation algorithms of intracellular stochastic kinetics are usually based on the premise that diffusion is so fast that the concentrations of all the involved species are homogeneous in space. However, recent experimental measurements of intracellular diffusion constants indicate that the assumption of a homogeneous, well-stirred cytosol is not necessarily valid even for small prokaryotic cells. In this work a mathematical treatment of diffusion that can be incorporated in a stochastic algorithm simulating the dynamics of a reaction-diffusion system is presented. The movement of a molecule A from a region i to a region j of the space is represented as a first-order reaction A_i -> A_j with rate constant k, where k depends on the diffusion coefficient. The diffusion coefficients are modeled as functions of the local concentration of the solutes, their intrinsic viscosities, their frictional coefficients and the temperature of the system. The stochastic time evolution of the system is given by the occurrence of diffusion events and chemical reaction events. At each time step an event (reaction or diffusion) is selected from a probability distribution of waiting times determined by the intrinsic reaction kinetics and diffusion dynamics. To demonstrate the method, simulation results for the reaction-diffusion system of chaperone-assisted protein folding in the cytoplasm are shown.
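As an illustration (not the authors' implementation), the compartment-based treatment of diffusion can be sketched as a Gillespie-type simulation in which the hops A_i -> A_j and A_j -> A_i are first-order events; taking a constant diffusion coefficient D and compartment size h with rate constant k = D/h^2 is our simplifying assumption, and all names are ours:

```python
import math
import random

def gillespie_diffusion(n_i, n_j, D, h, t_end, seed=0):
    """Treat diffusion of species A between two compartments i and j as a
    pair of first-order reactions A_i -> A_j and A_j -> A_i with rate
    constant k = D / h**2, simulated with Gillespie's direct method."""
    rng = random.Random(seed)
    k = D / h**2
    t = 0.0
    while t < t_end:
        a1 = k * n_i          # propensity of the hop A_i -> A_j
        a2 = k * n_j          # propensity of the hop A_j -> A_i
        a0 = a1 + a2
        if a0 == 0.0:
            break
        # exponential waiting time until the next diffusion event
        t += -math.log(1.0 - rng.random()) / a0
        if rng.random() * a0 < a1:
            n_i, n_j = n_i - 1, n_j + 1
        else:
            n_i, n_j = n_i + 1, n_j - 1
    return n_i, n_j
```

Starting from an all-in-one-compartment state, the molecule counts relax toward an even split while the total is conserved, as expected for pure diffusion.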
Abstract: Many footbridges have natural frequencies that
coincide with the dominant frequencies of the pedestrian-induced
load and are therefore prone to excessive vibrations
under pedestrian-induced dynamic loads. Some design
standards introduce pedestrian load models applicable to
simple structures. Load modeling for more complex structures, on the
other hand, is most often left to the designer. The main focus of this
paper is on the human induced forces transmitted to a footbridge and
on the ways these loads can be modeled to be used in the dynamic
design of footbridges. Design criteria and load models proposed
by widely used standards are also introduced and compared.
The dynamic analysis of the suspension bridge in Kolin in the
Czech Republic was performed on a detailed FEM model using the
ANSYS program system. An attempt to model the load imposed by a
single person and a crowd of pedestrians resulted in displacements
and accelerations that are compared with serviceability criteria.
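A common way to model the load from a single walking pedestrian is a static weight plus the first few walking harmonics; the exact form and coefficients differ between standards, and the values below are illustrative only, not taken from a specific code:

```python
import math

def pedestrian_force(t, G=700.0, f_p=2.0, alphas=(0.4, 0.1, 0.1)):
    """Harmonic single-pedestrian load model
    F(t) = G * (1 + sum_i alpha_i * sin(2*pi*i*f_p*t)):
    static weight G [N] plus harmonics at pacing frequency f_p [Hz].
    The dynamic load factors `alphas` are illustrative placeholders."""
    return G * (1.0 + sum(a * math.sin(2.0 * math.pi * (i + 1) * f_p * t)
                          for i, a in enumerate(alphas)))
```

Resonance occurs when a multiple of the pacing frequency f_p matches a natural frequency of the footbridge, which is why spans with frequencies near 2 Hz are the critical cases.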
Abstract: A business process model describes the process flow of a
business and can be seen as the requirement for developing a
software application. This paper discusses a BPM2CD guideline
which complements the Model Driven Architecture concept by
suggesting how to create a platform-independent software model, in
the form of a UML class diagram, from a business process model. An
important step is the identification of UML classes from the business
process model. A technique for object-oriented analysis called
domain analysis is borrowed, and key concepts in the business
process model are discovered and proposed as candidate classes
for the class diagram. The paper enhances this step by using ontology
search to help identify important classes for the business domain. As
an ontology is a source of knowledge for a particular domain, which
itself can link to ontologies of related domains, the search can give a
refined set of candidate classes for the resulting class diagram.
Abstract: The deep and radical social reforms of the 1990s
in many Eastern European countries caused changes in the field of
Information Technology (IT). Inefficient information
technologies were rapidly replaced with state-of-the-art IT solutions;
for example, Eastern European countries now show a high
penetration of high-quality, high-speed Internet. The authors took part in the
introduction of these changes at Latvia's leading IT research
institute. Drawing on this experience, the authors offer an
IT-services-based model for analyzing these change and
development processes in the higher education and research fields,
i.e., for the development of research e-infrastructure. Compared to the
international practice, such services were developed in Eastern Europe
in an unconventional way, which enabled swift and positive
technological change.
Abstract: Camera calibration is an indispensable step for augmented
reality or image guided applications where quantitative information
should be derived from the images. Usually, a camera
calibration is obtained by taking images of a special calibration object
and extracting the image coordinates of projected calibration marks
enabling the calculation of the projection from the 3D world coordinates
to the 2D image coordinates. Such a procedure comprises
typical steps, including feature-point localization in the acquired
images, camera model fitting, correction of distortion introduced by
the optics and, finally, optimization of the model's parameters. In
this paper we propose to extend this list by a further step: the
identification of the optimal subset of images yielding the smallest
overall calibration error. For this, we present a Monte Carlo-based
algorithm along with a deterministic extension that automatically
determines the images yielding an optimal calibration. Finally, we
present results proving that the calibration can be significantly
improved by automated image selection.
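The Monte Carlo selection step can be sketched as follows; `calib_error` is a hypothetical stand-in for a full calibration run that returns the overall reprojection error of a candidate image subset, and all parameter names are ours:

```python
import random

def best_image_subset(images, calib_error, n_trials=500, k=10, seed=0):
    """Monte Carlo search for the image subset giving the smallest
    calibration error: repeatedly sample a random k-image subset,
    calibrate on it, and keep the subset with the lowest error."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_trials):
        subset = rng.sample(images, k)
        err = calib_error(subset)     # stand-in for a calibration run
        if err < best_err:
            best, best_err = subset, err
    return best, best_err
```

A deterministic extension, as mentioned in the abstract, could then refine the best random subset by greedily swapping individual images while the error keeps decreasing.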
Abstract: Evolutionary Algorithms are population-based,
stochastic search techniques, widely used as efficient global
optimizers. However, many real-life optimization problems often
require finding optimal solutions to complex, high-dimensional,
multimodal problems involving computationally very expensive
fitness function evaluations. Use of evolutionary algorithms in such
problem domains is thus practically prohibitive. An attractive
alternative is to build meta-models, i.e., approximations of the
actual fitness functions, which are orders of magnitude cheaper to
evaluate than the actual function. Many regression and
interpolation tools are available to
build such meta-models. This paper briefly discusses the
architectures and use of such meta-modeling tools in an evolutionary
optimization context. We further present two evolutionary algorithm
frameworks which involve use of meta models for fitness function
evaluation. The first framework, namely the Dynamic Approximate
Fitness based Hybrid EA (DAFHEA) model [14] reduces
computation time by controlled use of meta-models (in this case
approximate model generated by Support Vector Machine
regression) to partially replace the actual function evaluation by
approximate function evaluation. However, the underlying
assumption in DAFHEA is that the training samples for the meta-model
are generated from a single uniform model. This does not take
into account uncertain scenarios involving noisy fitness functions.
The second model, DAFHEA-II, an enhanced version of the original
DAFHEA framework, incorporates a multiple-model based learning
approach for the support vector machine approximator to handle
noisy functions [15]. Empirical results obtained by evaluating the
frameworks using several benchmark functions demonstrate their
efficiency.
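As a toy illustration of surrogate-assisted evaluation, using a simple inverse-distance interpolator in place of the SVM regression used in DAFHEA, offspring can be pre-screened on the meta-model so that only the most promising candidate incurs an exact (expensive) evaluation; the whole sketch, including all names and parameter values, is ours:

```python
import random

def idw_surrogate(archive):
    """Inverse-distance-weighted interpolation over evaluated points;
    a toy stand-in for an SVM-regression meta-model."""
    def predict(x):
        num = den = 0.0
        for xi, fi in archive:
            d2 = sum((a - b) ** 2 for a, b in zip(x, xi)) + 1e-12
            w = 1.0 / d2
            num += w * fi
            den += w
        return num / den
    return predict

def surrogate_ea(f, dim=2, gens=60, lam=20, seed=0):
    """Elitist EA: each generation, lam offspring are screened on the
    surrogate and only the most promising one is evaluated exactly."""
    rng = random.Random(seed)
    best_x = [rng.uniform(-3, 3) for _ in range(dim)]
    best_f = f(best_x)
    archive = [(tuple(best_x), best_f)]       # all exact evaluations seen
    for _ in range(gens):
        predict = idw_surrogate(archive)
        offspring = [[xi + rng.gauss(0.0, 0.5) for xi in best_x]
                     for _ in range(lam)]
        cand = min(offspring, key=predict)    # cheap surrogate screening
        fc = f(cand)                          # single exact evaluation
        archive.append((tuple(cand), fc))
        if fc < best_f:
            best_x, best_f = cand, fc
    return best_x, best_f
```

Here the expensive function is called once per generation instead of lam times, which is the cost saving the meta-model approach targets.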
Abstract: This conference paper discusses a risk allocation problem for subprime investing banks involving investment in subprime structured mortgage products (SMPs) and Treasuries. In order to solve this problem, we develop a Lévy process-based model of jump diffusion-type for investment choice in subprime SMPs and Treasuries. This model incorporates subprime SMP losses for which credit default insurance in the form of credit default swaps (CDSs) can be purchased. In essence, we solve a mean swap-at-risk (SaR) optimization problem for investment which determines optimal allocation between SMPs and Treasuries subject to credit risk protection via CDSs. In this regard, SaR is indicative of how much protection investors must purchase from swap protection sellers in order to cover possible losses from SMP default. Here, SaR is defined in terms of value-at-risk (VaR). Finally, we provide an analysis of the aforementioned optimization problem and its connections with the subprime mortgage crisis (SMC).
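Since SaR is defined in terms of value-at-risk, it may help to recall the empirical VaR estimator; this is a generic sketch of ours, not the paper's Lévy-model computation:

```python
def value_at_risk(losses, alpha=0.95):
    """Empirical value-at-risk at confidence level alpha: the alpha-quantile
    of the loss distribution (losses given as positive numbers), i.e. the
    loss level exceeded with probability at most 1 - alpha."""
    xs = sorted(losses)
    idx = min(len(xs) - 1, int(alpha * len(xs)))
    return xs[idx]
```

In the paper's setting, the analogous quantity is computed from the modeled loss distribution of the SMP position and translated into the amount of CDS protection to purchase.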
Abstract: This paper deals with the conceptual design of the
new aeroelastic demonstrator for the whirl flutter simulation. The
paper gives a theoretical background of the whirl flutter phenomenon
and describes the events of the whirl flutter occurrence in the
aerospace practice. The second part is focused on the experimental
research of whirl flutter on aeroelastically similar models. Finally, the
concept of the new aeroelastic demonstrator is described. The
demonstrator represents the wing and engine of the twin turboprop
commuter aircraft including a driven propeller. It allows the changes
of the main structural parameters influencing the whirl flutter
stability characteristics. It is intended for the experimental
investigation of the whirl flutter in the wind tunnel. The results will
be utilized for validation of analytical methods and software tools.
Abstract: In the present work, the drying characteristics of fresh papaya (Carica papaya L.) were studied to understand the dehydration process and its behavior. Drying experiments were carried out in a laboratory-scale microwave-vacuum oven. The parameters affecting drying characteristics, including operating mode (continuous, pulsed), microwave power (400 and 800 W), and vacuum pressure (20, 30, and 40 cmHg), were investigated. For pulsed mode, two levels of power-off time (60 and 120 s) were used, while the power-on time was fixed at 60 s and the vacuum pressure was fixed at 40 cmHg. For both operating modes, the effects of drying conditions on drying time, drying rate, and effective diffusivity were investigated. The results showed that high microwave power, high vacuum, and the pulsed mode of 60 s on/60 s off favored the drying rate, as shown by the shortened drying time and increased effective diffusivity. The drying characteristics were then described by Page's model, which showed good agreement with the experimental data.
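Page's model, MR = exp(-k t^n), can be fitted by linearizing it as ln(-ln MR) = ln k + n ln t and applying ordinary least squares to the drying-curve data; a minimal sketch (our code, not the authors'):

```python
import math

def fit_page_model(t, mr):
    """Fit Page's thin-layer drying model MR = exp(-k * t**n) by
    linearizing ln(-ln MR) = ln k + n ln t and running ordinary
    least squares on (ln t, ln(-ln MR)). Requires 0 < MR < 1, t > 0."""
    xs = [math.log(ti) for ti in t]
    ys = [math.log(-math.log(m)) for m in mr]
    n_pts = len(xs)
    mx = sum(xs) / n_pts
    my = sum(ys) / n_pts
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    n = sxy / sxx                      # Page exponent (slope)
    k = math.exp(my - n * mx)          # Page rate constant (from intercept)
    return k, n
```

With noise-free synthetic data the linearization recovers k and n exactly, which makes it a convenient first pass before a full nonlinear fit.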
Abstract: Based on the ability of the auto-disturbance-rejection controller (ADRC) to compensate dynamically for model disturbances and uncertainty, a new ADRC-based method is proposed for the decoupling control of dispenser longitudinal movement over a large flight envelope. Developed directly from the nonlinear model, ADRC is especially suitable for dynamic models subject to large disturbances. Furthermore, because the structure and parameters of the controller need not change across the flight envelope, this scheme can simplify the design of the flight control system. Simulation results over a large flight envelope show that the system achieves good dynamic and steady-state performance and that the controller has strong robustness.
Abstract: Fuel and oxidant gas delivery plate, or fuel cell
plate, is a key component of a Proton Exchange Membrane (PEM)
fuel cell. To manufacture low-cost, high-performance fuel cell
plates, advanced computer modeling and finite element structural
analysis are used as virtual prototyping tools for the optimization
of the plates at the early design stage. The present study examines
thermal stress analysis of the fuel cell plates that are produced
using a patented, low-cost fuel cell plate production technique
based on screen-printing. Design optimization is applied to
minimize the maximum stress within the plate, subject to strain
constraint with both geometry and material parameters as design
variables. The study reveals the characteristics of the printed
plates, and provides guidelines for the structure and material design
of the fuel cell plate.
Abstract: This paper deals with a numerical analysis of the
transient response of composite beams with strain rate dependent
mechanical properties by use of a finite difference method. The
equations of motion based on Timoshenko beam theory are derived.
The geometric nonlinearity effects are taken into account with von
Kármán large deflection theory. The finite difference method in
conjunction with Newmark average acceleration method is applied to
solve the differential equations. A modified progressive damage
model which accounts for strain rate effects is developed based on
the material property degradation rules and modified Hashin-type
failure criteria and added to the finite difference model. The
components of the model are implemented into a computer code in
Mathematica 6. Glass/epoxy laminated composite beams with
constant and strain rate dependent mechanical properties under
dynamic load are analyzed. The effects of strain rate on the dynamic
response of the beam are investigated for various stacking sequences,
loads and boundary conditions.
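The Newmark average-acceleration time stepping mentioned above can be illustrated on a linear single-degree-of-freedom system; this is a scalar sketch of the scheme only (the paper couples it to the nonlinear finite difference beam model), and all names are ours:

```python
def newmark_average(m, c, k, p, u0, v0, dt, steps):
    """Newmark average-acceleration (gamma = 1/2, beta = 1/4) time stepping
    for the linear SDOF equation m*u'' + c*u' + k*u = p(t)."""
    gamma, beta = 0.5, 0.25
    u, v = u0, v0
    a = (p(0.0) - c * v - k * u) / m                 # initial acceleration
    keff = k + gamma * c / (beta * dt) + m / (beta * dt * dt)
    us = [u]
    for i in range(1, steps + 1):
        t = i * dt
        # effective load at t_{i+1} built from the state at t_i
        peff = (p(t)
                + m * (u / (beta * dt * dt) + v / (beta * dt)
                       + a * (1.0 / (2.0 * beta) - 1.0))
                + c * (gamma * u / (beta * dt) + v * (gamma / beta - 1.0)
                       + a * dt * (gamma / (2.0 * beta) - 1.0)))
        u_new = peff / keff
        a_new = ((u_new - u) / (beta * dt * dt) - v / (beta * dt)
                 - a * (1.0 / (2.0 * beta) - 1.0))
        v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        us.append(u)
    return us
```

The average-acceleration variant is unconditionally stable and conserves the amplitude of undamped free vibration, which is why it is a common companion to spatial finite difference or finite element discretizations.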
Abstract: This paper proposes a stiffness analysis method for a
3-PRS mechanism for welding thick aluminum plate using friction stir
welding (FSW) technology. In the modeling process, the elastic
deformations of the lead-screws and links are taken into account. The
method is based on the virtual work principle. Following a survey of
commonly used stiffness performance indices, the minimum and maximum
eigenvalues of the stiffness matrix are used to evaluate the stiffness
of the 3-PRS mechanism. Furthermore, an FEA model has been constructed
to verify the method. Finally, we redefine the workspace using the
stiffness analysis method.
Abstract: The C control chart assumes that process nonconformities follow a Poisson distribution. In practice, however, this Poisson distribution does not always hold. A semiconductor process control based on a Poisson distribution then underestimates the true average number of nonconformities and the process variance, and quality is described more accurately if a compound Poisson process is used for process control. A cumulative sum (CUSUM) control chart is much better than a C control chart when a small shift must be detected. This study calculates one-sided CUSUM average run lengths (ARLs) using a Markov chain approach in order to construct a CUSUM control chart with an underlying Poisson-Gamma compound distribution for the failure mechanism. Moreover, an actual data set from a wafer plant is used to demonstrate the operation of the proposed model. The results show that the CUSUM control chart achieves significantly better performance than the EWMA chart.
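The one-sided upper CUSUM statistic underlying such a chart is the recursion S_t = max(0, S_{t-1} + x_t - k), signalling when S_t >= h; a minimal sketch of the monitoring side (the reference value k and decision interval h are chart design parameters, and the ARL computation via the Markov chain is not reproduced here):

```python
def cusum_run_length(counts, k_ref, h):
    """One-sided upper CUSUM for count data:
    S_t = max(0, S_{t-1} + x_t - k_ref), signal when S_t >= h.
    Returns the run length (number of samples until the signal),
    or None if the chart never signals on the given data."""
    s = 0.0
    for t, x in enumerate(counts, start=1):
        s = max(0.0, s + x - k_ref)
        if s >= h:
            return t
    return None
```

In-control counts below the reference value keep S_t pinned at zero, while a sustained upward shift accumulates steadily until the decision interval is crossed.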
Abstract: This paper introduces a process for the module-level integration of computer-based systems. It is based on the Six Sigma Process Improvement Model, and the goal of the process is to improve the overall quality of the system under development. We also present a conceptual framework that shows how this process can be implemented as an integration solution. Finally, we provide a partial implementation of key components of the conceptual framework.
Abstract: In this paper, a particle swarm optimization (PSO)
algorithm is proposed to solve the machine loading problem in a flexible
manufacturing system (FMS), with the bicriterion objectives of
minimizing system unbalance and maximizing system throughput
subject to technological constraints such as available
machining time and tool slots. A mathematical model is used to
select machines and assign operations and the required tools. The
performance of the PSO is tested on 10 sample datasets and the
results are compared with heuristics reported in the literature. The
results show that the proposed PSO is comparable with the
algorithms reported in the literature.
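A minimal global-best PSO for continuous minimization is sketched below; the paper's machine-loading variant additionally encodes discrete machine and operation assignments, which this generic sketch (with illustrative parameter values) omits:

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer minimizing f,
    with particles initialized in [-5, 5]^dim."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    Pf = [f(x) for x in X]                     # personal best values
    g = min(range(n_particles), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])   # cognitive pull
                           + c2 * r2 * (G[d] - X[i][d]))     # social pull
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
                if fx < Gf:
                    G, Gf = X[i][:], fx
    return G, Gf
```

For a bicriterion loading objective, the two criteria (system unbalance and throughput) would typically be combined into a single fitness value or handled with a Pareto-ranking variant of the update above.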
Abstract: Dengue fever is an important human arboviral disease, and outbreaks are now reported quite often from many parts of the world. The numbers of cases involving pregnant women and infants are increasing every year. The illness is often severe and complications may occur; deaths often result from the difficulty of early diagnosis and from improper management of the disease. Dengue antibodies are transferred from the mother to the fetus while it is still in the womb, and these maternal antibodies protect infants from dengue infection. In this study, we formulate a mathematical model to describe the transmission of this disease in pregnant women. The model is formulated by dividing the human population into pregnant women and non-pregnant humans (men and non-pregnant women). Each class is subdivided into susceptible (S), infectious (I) and recovered (R) subclasses. We apply standard dynamical analysis to our model. Conditions for the local stability of the equilibrium points are given, numerical simulations are shown, and the bifurcation diagrams of our model are discussed. The control of this disease in pregnant women is discussed in terms of the threshold conditions.
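The compartmental structure can be illustrated with a forward-Euler integration of two coupled SIR classes sharing a common force of infection; folding the mosquito (vector) dynamics into a single effective transmission rate beta is our simplifying assumption, not the paper's full model:

```python
def simulate_two_class_sir(beta, gamma, S, I, R, days, dt=0.1):
    """Forward-Euler integration of coupled SIR dynamics for two human
    classes (index 0: pregnant women, index 1: non-pregnant humans)
    sharing a common force of infection lam = beta * (I0 + I1) / N.
    S, I, R are two-element lists, modified and returned."""
    N = sum(S) + sum(I) + sum(R)
    steps = int(days / dt)
    for _ in range(steps):
        lam = beta * (I[0] + I[1]) / N        # common force of infection
        for c in (0, 1):
            dS = -lam * S[c]                  # susceptibles infected
            dI = lam * S[c] - gamma * I[c]    # infectious gain minus recovery
            dR = gamma * I[c]                 # recoveries
            S[c] += dS * dt
            I[c] += dI * dt
            R[c] += dR * dt
    return S, I, R
```

Because dS + dI + dR = 0 in each class, the total population is conserved, and with beta/gamma above one a seeded epidemic sweeps through both classes, including the pregnant-women compartment.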
Abstract: In this work, we improve a previously developed
segmentation scheme aimed at extracting edge information from
speckled images using a maximum likelihood edge detector. The
scheme was based on finding a threshold for the probability density
function of a new kernel defined as the arithmetic mean-to-geometric
mean ratio field over a circular neighborhood set and, in a general
context, is founded on a likelihood random field model (LRFM). The
segmentation algorithm was applied to discriminated speckle areas
obtained using simple elliptic discriminant functions based on
measures of the signal-to-noise ratio with fractional order moments.
A rigorous stochastic analysis was used to derive an exact expression
for the cumulative distribution function of the
random field. Based on this, an accurate probability
of error was derived and the performance of the scheme was
analysed. The improved segmentation scheme performed well for
both simulated and real images and showed superior results to those
previously obtained using the original LRFM scheme and standard
edge detection methods. In particular, the false alarm probability was
markedly lower than that of the original LRFM method with
oversegmentation artifacts virtually eliminated. The importance of
this work lies in the development of a stochastic-based segmentation,
allowing an accurate quantification of the probability of false
detection. Non-visual quantification of misclassification in medical
ultrasound speckle images is relatively new and is of interest to
clinicians.
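The arithmetic-mean-to-geometric-mean ratio kernel at the heart of the scheme is straightforward to compute over a neighbourhood of positive pixel intensities; this sketch of ours takes a flat list of intensities and ignores the circular neighbourhood geometry:

```python
import math

def am_gm_ratio(patch):
    """Arithmetic-mean-to-geometric-mean ratio of pixel intensities over a
    neighbourhood (given as a flat list of positive values). Equals 1 for
    a constant patch and grows with intensity heterogeneity, which is why
    thresholding it can flag edges in speckled images."""
    n = len(patch)
    am = sum(patch) / n
    # geometric mean computed in log space to avoid overflow
    gm = math.exp(sum(math.log(x) for x in patch) / n)
    return am / gm
```

By the AM-GM inequality the ratio is always at least 1, so an edge detector can threshold the ratio field directly against a value derived from its distribution under the homogeneous-speckle hypothesis.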
Abstract: The coalescer process is one method of oily water treatment: it increases the oil droplet size in order to enhance the separation velocity and thus achieve effective separation. However, the presence of surfactants in an oily emulsion can limit these mechanisms, owing to the small oil droplet size associated with a stabilized emulsion. The purpose of this research is therefore to improve the efficiency of the coalescer process for treating stabilized emulsions. The effects of bed type, bed height, liquid flow rate and staged (step-bed) coalescing on the treatment efficiency, in terms of COD values, were studied. Note that the treatment efficiency obtained experimentally was estimated using the COD values and the oil droplet size distribution. The study has shown that the plastic media attach to oil particles more effectively than the stainless ones because of their hydrophobic properties. Furthermore, a suitable bed height (3.5 cm) and step bed (3.5 cm with 2 steps) were necessary in order to obtain good coalescer performance. The step-bed coalescer process provided higher treatment efficiencies, in terms of COD removal, than the classical process. The proposed model for predicting the area under the curve, and thus the treatment efficiency, based on the single collector efficiency (ηT) and the attachment efficiency (α), shows relatively good agreement between the experimental and predicted treatment efficiencies in this study.