Abstract: Many recent high energy physics calculations involving
charm and beauty invoke the wave function at the origin (WFO) of the
meson bound state. Uncertainties in the charm and beauty quark
masses, together with the different models for the potentials
governing these bound states, call for a simple numerical algorithm
for evaluating the WFOs of these bound states. We present a simple
algorithm for this purpose, which provides WFOs with high precision
compared with similar values already obtained in the literature.
Abstract: PPX (Pretty Printer for XML) is a query language that offers a concise way to describe how XML data is formatted into HTML. In this paper, we propose a simple formatting specification that combines automatic layout operators and variables in the layout expression of the GENERATE clause of PPX. This method can automatically format irregular XML data embedded within an XML document, using layout decision rules that refer to the DTD. In our experiment, a quick comparison shows that PPX requires far less description than XSLT or XQuery programs performing the same tasks.
Abstract: Present models and simulation algorithms of intracellular stochastic kinetics are usually based on the premise that diffusion is so fast that the concentrations of all the involved species are homogeneous in space. However, recent experimental measurements of intracellular diffusion constants indicate that the assumption of a homogeneous, well-stirred cytosol is not necessarily valid even for small prokaryotic cells. In this work a mathematical treatment of diffusion that can be incorporated in a stochastic algorithm simulating the dynamics of a reaction-diffusion system is presented. The movement of a molecule A from a region i to a region j of the space is represented as a first-order reaction A_i -k-> A_j, where the rate constant k depends on the diffusion coefficient. The diffusion coefficients are modeled as functions of the local concentration of the solutes, their intrinsic viscosities, their frictional coefficients and the temperature of the system. The stochastic time evolution of the system is given by the occurrence of diffusion events and chemical reaction events. At each time step an event (reaction or diffusion) is selected from a probability distribution of waiting times determined by the intrinsic reaction kinetics and diffusion dynamics. To demonstrate the method, simulation results for the reaction-diffusion system of chaperone-assisted protein folding in the cytoplasm are shown.
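The event-selection step described above matches the Gillespie direct method. The following stdlib-only sketch, with made-up rate constants and a toy two-region system, treats diffusion between regions as a first-order event drawn from the same waiting-time distribution as the chemical reaction:

```python
import math, random

# Toy two-region system: A diffuses between regions 1 and 2 (first-order
# events with rate constant k_diff) and reacts to B in region 2 (k_rxn).
# All rate values are illustrative, not taken from the paper.
k_diff, k_rxn = 0.5, 1.0
state = {"A1": 100, "A2": 0, "B2": 0}   # molecule counts per region
t, t_end = 0.0, 10.0
random.seed(1)

while t < t_end:
    # Propensity of every possible event, diffusion and reaction alike.
    events = [
        ("A1->A2", k_diff * state["A1"]),   # diffusion out of region 1
        ("A2->A1", k_diff * state["A2"]),   # diffusion back into region 1
        ("A2->B2", k_rxn * state["A2"]),    # chemical reaction in region 2
    ]
    a0 = sum(a for _, a in events)
    if a0 == 0.0:
        break                                # nothing left to happen
    # Exponential waiting time to the next event, then pick which one fired.
    t += -math.log(1.0 - random.random()) / a0
    r, acc = random.random() * a0, 0.0
    for name, a in events:
        acc += a
        if r <= acc:
            src, dst = name.split("->")
            state[src] -= 1
            state[dst] += 1
            break

print(state)
```

In a full simulation the diffusion rate constants would themselves be recomputed from the local solute concentrations, viscosities and temperature, as the abstract describes.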
Abstract: Many footbridges have natural frequencies that coincide
with the dominant frequencies of pedestrian-induced loads and are
therefore prone to excessive vibrations under dynamic loads induced
by pedestrians. Some design standards introduce load models for
pedestrian loads applicable to simple structures. Load modeling for
more complex structures, on the other hand, is most often left to the
designer. The main focus of this paper is on the human-induced forces
transmitted to a footbridge and on the ways these loads can be
modeled for use in the dynamic design of footbridges. Design criteria
and load models proposed by widely used standards are also introduced
and compared. The dynamic analysis of the suspension bridge in Kolin
in the Czech Republic was performed on a detailed FEM model using the
ANSYS program system. Modeling the load imposed by a single person
and by a crowd of pedestrians yielded displacements and accelerations
that are compared with serviceability criteria.
Abstract: A business process model describes the process flow of a
business and can be seen as a requirement for developing a software
application. This paper discusses the BPM2CD guideline, which
complements the Model Driven Architecture concept by suggesting how
to create a platform-independent software model, in the form of a UML
class diagram, from a business process model. An important step is
the identification of UML classes from the business process model. A
technique for object-oriented analysis called domain analysis is
borrowed to discover key concepts in the business process model and
propose them as candidate classes for the class diagram. The paper
enhances this step by using ontology search to help identify
important classes for the business domain. Since an ontology is a
source of knowledge for a particular domain that can itself link to
ontologies of related domains, the search can yield a refined set of
candidate classes for the resulting class diagram.
Abstract: The deep and radical social reforms of the 1990s in many
Eastern European countries caused changes in the Information
Technology (IT) field. Inefficient information technologies were
rapidly replaced with forefront IT solutions; for example, Eastern
European countries now show a high penetration of high-quality,
high-speed Internet. The authors took part in the introduction of
these changes at Latvia's leading IT research institute. Drawing on
this experience, the authors offer an IT-services-based model for
analyzing these change and development processes in the higher
education and research fields, i.e., for the development of research
e-infrastructure. Compared with international practice, such services
were developed in Eastern Europe in an untraditional way, which
enabled swift and positive technological change.
Abstract: This paper proposes a fast code acquisition scheme for
optical code division multiple access (O-CDMA) systems. Unlike the
conventional scheme, the proposed scheme employs multiple thresholds,
providing a shorter mean acquisition time (MAT).
The simulation results show that the MAT of the proposed scheme
is shorter than that of the conventional scheme.
Abstract: Sputum smear conversion after one month of antituberculosis
therapy in new smear-positive pulmonary tuberculosis (PTB+) patients
is a vital indicator of treatment success. The objective of this
study is to determine the rate of sputum smear conversion in new PTB+
patients after one month of treatment at the National Institute of
Diseases of the Chest and Hospital (NIDCH). Sputum smear conversion
was assessed by clinical re-examination with a sputum smear
microscopy test after one month. Socio-demographic and hematological
parameters were evaluated to assess their correlation with disease
status. Among all enrolled patients, only 33.33% were available for
follow-up diagnosis, and of them only 42.86% turned smear negative.
This outcome is probably due to non-adherence to proper disease
management. 66.67% and 78.78% of patients showed low haemoglobin and
packed cell volume levels, respectively, whereas 80% and 93.33% of
patients showed elevated platelet counts and erythrocyte
sedimentation rates, respectively.
Abstract: The chemically defined Schlegel's medium was modified to
improve cell growth and the production of other metabolites by the
fluorescent pseudomonad strain R62. The modified medium does not
require pH control, as pH changes are kept within ±0.2 units of the
initial pH 7.1 during fermentation. Siderophore production by the
fluorescent pseudomonad strain was optimized in the modified medium
containing 1% glycerol as the major carbon source, supplemented with
0.05% succinic acid and 0.5% L-tryptophan. Indole-3-acetic acid (IAA)
production was higher when L-tryptophan was used at 0.5%. Production
of 2,4-diacetylphloroglucinol (DAPG) was higher when the medium was
amended with three trace elements. The optimized medium produced
2.28 g/l of dry cell mass and 900 mg/l of siderophore at the end of
36 h of cultivation, while the production levels of IAA and DAPG were
65 mg/l and 81 mg/l, respectively, at the end of 48 h of cultivation.
Abstract: This article outlines the conceptualization and
implementation of an intelligent system capable of extracting
knowledge from databases. The use of hybridized features of both
Rough and Fuzzy Set theory renders the developed system flexible in
dealing with discrete as well as continuous datasets. A raw data set
provided to the system is initially transformed into a
computer-legible format, followed by pruning of the data set. The
refined data set is then processed through various Rough Set
operators, which enable the discovery of parameter relationships and
interdependencies. The discovered knowledge is automatically
transformed into a rule base expressed in Fuzzy terms. Two exemplary
cancer repository datasets (for Breast and Lung Cancer) have been
used to implement and test the proposed framework.
Abstract: Camera calibration is an indispensable step for augmented
reality or image-guided applications where quantitative information
must be derived from the images. Usually, a camera calibration is
obtained by taking images of a special calibration object and
extracting the image coordinates of projected calibration marks,
enabling the calculation of the projection from 3D world coordinates
to 2D image coordinates. Such a procedure thus involves typical
steps, including feature point localization in the acquired images,
camera model fitting, correction of the distortion introduced by the
optics and, finally, an optimization of the model's parameters. In
this paper we propose to extend this list by a further step
concerning the identification of the optimal subset of images
yielding the smallest overall calibration error. For this, we present
a Monte Carlo based algorithm, along with a deterministic extension,
that automatically determines the images yielding an optimal
calibration. Finally, we present results showing that the calibration
can be significantly improved by automated image selection.
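The Monte Carlo subset search could be sketched as below. Here calibration_error is a hypothetical stand-in for the overall calibration error (e.g. RMS reprojection error) that a real calibration run on the chosen images would return, and the per-image error values are invented for illustration:

```python
import random

def calibration_error(subset):
    """Hypothetical stand-in for the overall calibration error obtained
    by running a full calibration (camera model fitting) on the chosen
    images; the per-image error values below are invented."""
    per_image_error = {0: 0.9, 1: 0.2, 2: 1.5, 3: 0.3, 4: 0.8}
    return sum(per_image_error[i] for i in subset) / len(subset)

def monte_carlo_select(n_images, subset_size, trials, seed=0):
    """Randomly sample image subsets, calibrate on each, keep the best."""
    rng = random.Random(seed)
    best_subset, best_err = None, float("inf")
    for _ in range(trials):
        subset = tuple(sorted(rng.sample(range(n_images), subset_size)))
        err = calibration_error(subset)
        if err < best_err:
            best_subset, best_err = subset, err
    return best_subset, best_err

subset, err = monte_carlo_select(n_images=5, subset_size=2, trials=200)
print(subset, err)
```

A deterministic counterpart would simply enumerate all subsets exhaustively whenever the number of acquired images is small enough.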
Abstract: Evolutionary algorithms are population-based, stochastic
search techniques, widely used as efficient global optimizers.
However, many real-life optimization problems require finding optimal
solutions to complex, high-dimensional, multimodal problems involving
computationally very expensive fitness function evaluations. The use
of evolutionary algorithms in such problem domains is thus
practically prohibitive. An attractive alternative is to build
meta-models, i.e., approximations of the actual fitness functions to
be evaluated. These meta-models are orders of magnitude cheaper to
evaluate than the actual function. Many regression and interpolation
tools are available for building such meta-models. This paper briefly
discusses the architectures and use of such meta-modeling tools in an
evolutionary optimization context. We further present two
evolutionary algorithm frameworks which involve the use of
meta-models for fitness function evaluation. The first framework, the
Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14],
reduces computation time by controlled use of meta-models (in this
case an approximate model generated by Support Vector Machine
regression) to partially replace actual function evaluations with
approximate ones. However, the underlying assumption in DAFHEA is
that the training samples for the meta-model are generated from a
single uniform model, which does not account for uncertain scenarios
involving noisy fitness functions. The second model, DAFHEA-II, an
enhanced version of the original DAFHEA framework, incorporates a
multiple-model based learning approach for the support vector machine
approximator to handle noisy functions [15]. Empirical results
obtained by evaluating the frameworks on several benchmark functions
demonstrate their efficiency.
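The controlled alternation between true and approximate fitness evaluations in such frameworks can be sketched as follows. This is an illustrative toy, not the DAFHEA implementation: a nearest-neighbour regressor stands in for the SVM surrogate, and the quadratic fitness, population size and evaluation schedule are all assumed values:

```python
import random

def true_fitness(x):
    """Stand-in for an expensive evaluation (a simple 1-D quadratic)."""
    return (x - 3.0) ** 2

class NearestNeighborSurrogate:
    """Cheap approximation built from archived true evaluations
    (DAFHEA uses SVM regression; nearest-neighbour stands in here)."""
    def __init__(self):
        self.archive = []             # (x, true f(x)) pairs
    def add(self, x, y):
        self.archive.append((x, y))
    def predict(self, x):
        return min(self.archive, key=lambda p: abs(p[0] - x))[1]

random.seed(0)
surrogate = NearestNeighborSurrogate()
pop = [random.uniform(-10.0, 10.0) for _ in range(10)]

for gen in range(30):
    use_true = (gen % 5 == 0)         # controlled use: true evaluation every 5th generation
    scores = []
    for x in pop:
        if use_true or not surrogate.archive:
            y = true_fitness(x)       # expensive call; result also trains the surrogate
            surrogate.add(x, y)
        else:
            y = surrogate.predict(x)  # cheap approximate evaluation
        scores.append((y, x))
    scores.sort()
    parents = [x for _, x in scores[:5]]               # truncation selection
    pop = [p + random.gauss(0.0, 0.5) for p in parents for _ in (0, 1)]

best_x, best_y = min(surrogate.archive, key=lambda p: p[1])
print(round(best_x, 3), round(best_y, 4))
```

The archive of truly evaluated points doubles as the surrogate's training set, which is the design choice that makes the periodic true evaluations pay for themselves.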
Abstract: The conjugate gradient optimization algorithm, usually
used for nonlinear least squares, is presented and combined with
the modified back-propagation algorithm, yielding a new fast
multilayer perceptron (MLP) training algorithm (CGFR/AG). The
approach presented in this paper consists of three steps:
(1) modification of the standard back-propagation algorithm by
introducing a gain variation term of the activation function,
(2) calculation of the gradient of the error with respect to the
weight and gain values, and (3) determination of the new search
direction by exploiting the information calculated by gradient
descent in step (2) as well as the previous search direction. The
proposed method improves the training efficiency of the
back-propagation algorithm by adaptively modifying the initial
search direction. The performance of the proposed method is
demonstrated by comparison with the conjugate gradient algorithm
from a neural network toolbox on chosen benchmarks. The results
show that the number of iterations required by the proposed method
to converge is less than 20% of that required by the standard
conjugate gradient and neural network toolbox algorithms.
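A minimal sketch of steps (1)-(3) on a single sigmoid neuron with an explicit gain parameter, using the Fletcher-Reeves update for the search direction. The training pattern, learning rate and backtracking safeguard are illustrative choices, not the paper's exact procedure:

```python
import math

def sigmoid(z):
    z = max(-60.0, min(60.0, z))     # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

# One neuron y = sigmoid(c * w * x) with weight w and gain c; the single
# training pattern (x, target) is made up for illustration.
x, target = 1.5, 0.8

def loss_and_grad(theta):
    w, c = theta
    y = sigmoid(c * w * x)
    e = y - target
    dy = y * (1.0 - y)                       # sigmoid derivative
    return 0.5 * e * e, [e * dy * c * x,     # dE/dw
                         e * dy * w * x]     # dE/dc: gradient w.r.t. the gain

theta, lr = [0.1, 1.0], 0.5
prev_g = prev_d = None
for _ in range(300):
    loss, g = loss_and_grad(theta)
    if prev_g is None:
        d = [-gi for gi in g]                # (re)start with steepest descent
    else:
        # Fletcher-Reeves coefficient combines the new gradient with the
        # previous search direction.
        beta = sum(a * a for a in g) / max(sum(a * a for a in prev_g), 1e-12)
        d = [-gi + beta * di for gi, di in zip(g, prev_d)]
        if sum(gi * di for gi, di in zip(g, d)) >= 0.0:
            d = [-gi for gi in g]            # restart if not a descent direction
    trial = [t + lr * di for t, di in zip(theta, d)]
    new_loss, _ = loss_and_grad(trial)
    if new_loss < loss:
        theta, prev_g, prev_d = trial, g, d  # accept the step
    else:
        lr *= 0.5                            # backtrack and restart
        prev_g = prev_d = None

final_loss, _ = loss_and_grad(theta)
print(final_loss)
```

In a full MLP the gradients in step (2) would be computed layer by layer via back-propagation, with one gain parameter per neuron.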
Abstract: This paper presents the buckling analysis of short and
long functionally graded cylindrical shells under thermal and
mechanical loads. The shell properties are assumed to vary
continuously from the inner surface to the outer surface of the shell.
The equilibrium and stability equations are derived using the total
potential energy equations, Euler equations and first order shear
deformation theory assumptions. The resulting equations are solved
for simply supported boundary conditions. The critical temperature
and pressure loads are calculated for both short and long cylindrical
shells. Comparison studies show the effects of the functionally
graded index, loading type and shell geometry on the critical
buckling loads of short and long functionally graded cylindrical
shells.
Abstract: This paper deals with the conceptual design of the
new aeroelastic demonstrator for the whirl flutter simulation. The
paper gives a theoretical background of the whirl flutter phenomenon
and describes the events of the whirl flutter occurrence in the
aerospace practice. The second part is focused on the experimental
research of whirl flutter on aeroelastically similar models. Finally,
the concept of the new aeroelastic demonstrator is described. The
demonstrator represents the wing and engine of the twin turboprop
commuter aircraft including a driven propeller. It allows the changes
of the main structural parameters influencing the whirl flutter
stability characteristics. It is intended for the experimental
investigation of the whirl flutter in the wind tunnel. The results will
be utilized for validation of analytical methods and software tools.
Abstract: This article discusses the customs and traditions of
Turkestan in the late nineteenth and early twentieth centuries. With
its long history, Turkestan is well known as the birthplace of many
nations and nationalities. The name Turkestan was given to it for a
reason: it was the land of the Turkic peoples who inhabited Central
Asia and united together. Currently, the nations and nationalities of
the Turkestan region have formed their own sovereign states, and
every year they assert their countries' names in the world community.
The political and economic importance of Turkestan, which became the
golden bridge between Asia and Europe, was always very high, so
various aggressive actions were systematically undertaken by several
great powers. As a result of the expansionary colonization policy of
the Russian Empire, Turkestan appeared.
Abstract: Using Tmote mini modules, it is possible to automate a small personal area network. This idea can be extended to large networks by implementing multi-hop routing. By linking the various Tmotes using programming languages such as nesC and Java, with transmitter and receiver sections, a network can be monitored. It is foreseen that, depending on the application, a long range at a low data transfer rate or average throughput may be an acceptable trade-off. To reduce the overall costs involved, an optimum number of Tmotes to be used under various conditions (indoor/outdoor) is to be deduced. By analyzing the data rates or throughputs at various Tmote locations, it is possible to deduce an optimal number of Tmotes for a specific network. This paper deals with the determination of optimum distances to reduce the cost and increase the reliability of the entire sensor network with Wireless Local Loop (WLL) capability.
Abstract: In the present work, the drying characteristics of fresh papaya (Carica papaya L.) were studied to understand the dehydration process and its behavior. Drying experiments were carried out in a laboratory-scale microwave-vacuum oven. The parameters affecting the drying characteristics, including operating mode (continuous, pulsed), microwave power (400 and 800 W), and vacuum pressure (20, 30, and 40 cmHg), were investigated. For the pulsed mode, two levels of power-off time (60 and 120 s) were used, while the power-on time was fixed at 60 s and the vacuum pressure was fixed at 40 cmHg. For both operating modes, the effects of the drying conditions on drying time, drying rate, and effective diffusivity were investigated. The results showed that high microwave power, high vacuum, and the pulsed mode of 60 s on/60 s off favored the drying rate, as shown by the shortened drying time and increased effective diffusivity. The drying characteristics were then described by Page's model, which showed good agreement with the experimental data.
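Page's model expresses the moisture ratio as MR = exp(-k t^n), which can be fitted by linearizing to ln(-ln MR) = ln k + n ln t. A small sketch with synthetic data (generated from assumed k and n, not the paper's measurements) recovers the parameters by a straight-line fit:

```python
import math

# Synthetic moisture-ratio data generated from Page's model with assumed
# parameters k = 0.05, n = 1.2 (not the paper's measured values).
times = [5.0, 10.0, 20.0, 40.0, 60.0]             # drying time, min
mr = [math.exp(-0.05 * t ** 1.2) for t in times]  # moisture ratio MR

# Linearize ln(-ln MR) = ln k + n ln t and fit a straight line by
# ordinary least squares.
xs = [math.log(t) for t in times]
ys = [math.log(-math.log(m)) for m in mr]
n_pts = len(xs)
mx, my = sum(xs) / n_pts, sum(ys) / n_pts
n_fit = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))        # slope = n
k_fit = math.exp(my - n_fit * mx)                 # intercept = ln k

print(round(k_fit, 4), round(n_fit, 4))
```

With real measurements the same fit yields the k and n that summarize each drying condition.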
Abstract: Computations with precision higher than the IEEE 754 standard double precision (about 16 significant digits) have recently become necessary. Although software routines for high-precision computation are available in Fortran and C, users are required to install such routines on their own computers and to have detailed knowledge about them. We have constructed a user-friendly online system for octuple-precision computation. In our Web system, users with no knowledge of high-precision computation can easily perform octuple-precision computations by choosing mathematical functions with inputted argument(s), by writing simple mathematical expression(s), or by uploading C program(s). In this paper we enhance this Web system by adding a facility for uploading Fortran programs, which have been widely used in scientific computing. To this end we construct converter routines in two stages.
Abstract: In this paper we present a method for gene ranking from
DNA microarray data. More precisely, we calculate correlation
networks, which are unweighted and undirected graphs, from microarray
data of cervical cancer, where each network represents a tissue of a
certain tumor stage and each node in a network represents a gene.
From these networks we extract one tree for each gene by a local
decomposition of the correlation network. A tree contains, on its
n-th level, the n-nearest-neighbor genes as measured by the Dijkstra
distance, and hence gives the local embedding of a gene within the
correlation network. For the obtained trees we measure the pairwise
similarity between trees rooted at the same gene from normal to
cancerous tissues. This evaluates the modification of the tree
topology due to the progression of the tumor. Finally, we rank the
obtained similarity values from all tissue comparisons and select the
top-ranked genes. For these genes the local neighborhood in the
correlation networks changes most between normal and cancerous
tissues. As a result we find that the top-ranked genes are candidates
suspected to be involved in tumor growth, which indicates that our
method captures essential information from the underlying DNA
microarray data of cervical cancer.
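The per-gene tree extraction and similarity ranking can be sketched as follows, on toy networks standing in for thresholded gene-gene correlation graphs; the gene names, edges and similarity measure (mean per-level Jaccard overlap) are invented for illustration:

```python
def bfs_levels(adj, root, max_level=2):
    """Level n holds the n-th-nearest neighbours of `root` (Dijkstra
    distance with unit edge weights, i.e. BFS distance)."""
    levels, seen, frontier = [], {root}, [root]
    for _ in range(max_level):
        nxt = sorted({v for u in frontier for v in adj.get(u, []) if v not in seen})
        seen.update(nxt)
        levels.append(set(nxt))
        frontier = nxt
    return levels

def tree_similarity(levels_a, levels_b):
    """Mean Jaccard overlap of the per-level neighbour sets of two trees."""
    scores = []
    for a, b in zip(levels_a, levels_b):
        union = a | b
        scores.append(len(a & b) / len(union) if union else 1.0)
    return sum(scores) / len(scores)

# Toy networks: gene g1 changes its whole neighbourhood from normal to
# tumor tissue, while g2 keeps part of it.
normal = {"g1": ["g2", "g3"], "g2": ["g1", "g4"], "g3": ["g1"], "g4": ["g2"]}
tumor  = {"g1": ["g5"], "g5": ["g1"], "g2": ["g4"], "g4": ["g2"]}

sims = {g: tree_similarity(bfs_levels(normal, g), bfs_levels(tumor, g))
        for g in ("g1", "g2")}
ranked = sorted(sims, key=sims.get)   # most-changed neighbourhood first
print(ranked, sims)
```

Genes at the front of the ranking are those whose local embedding in the correlation network changed most between the tissue stages.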