Abstract: This paper discusses dichotomous logistic
regression (DLR) with leave-one-out (L-O-O) validation. To
illustrate the approach, L-O-O was run to determine the importance of the
simulation conditions for robust tests of spread procedures with good
Type I error rates, and the resultant model was then evaluated. The
discussion covers 1) assessment of the accuracy of the model and
2) the parameter estimates. These are presented and illustrated by
modeling the relationship between the dichotomous dependent
variable (Type I error rates) and a set of independent variables (the
simulation conditions). Base SAS software, with PROC
LOGISTIC and DATA step functions, can be used to perform the
DLR analysis.
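The L-O-O evaluation of a dichotomous logistic regression can be illustrated outside SAS as well. The following is a minimal pure-Python sketch: the data are synthetic, and the gradient-ascent fit is a simple stand-in for PROC LOGISTIC's maximum-likelihood routine, not the paper's actual analysis.

```python
import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Fit weights and bias by stochastic gradient ascent on the log-likelihood."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))
            err = yi - p
            w = [wj + lr * err * xj for wj, xj in zip(w, xi)]
            b += lr * err
    return w, b

def loo_accuracy(X, y):
    """Leave-one-out: refit with each observation held out, then score it."""
    correct = 0
    for i in range(len(X)):
        w, b = fit_logistic(X[:i] + X[i + 1:], y[:i] + y[i + 1:])
        z = sum(wj * xj for wj, xj in zip(w, X[i])) + b
        p = 1 / (1 + math.exp(-z))
        correct += (p >= 0.5) == (y[i] == 1)
    return correct / len(X)

# Synthetic "simulation condition" (one predictor) vs. a 0/1 outcome.
X = [[0.0], [0.5], [1.0], [1.5], [2.0], [2.5], [3.0], [3.5]]
y = [0, 0, 0, 0, 1, 1, 1, 1]
acc = loo_accuracy(X, y)
print(acc)  # high leave-one-out accuracy on this separable toy data
```

Each of the eight fits excludes exactly one observation, so the accuracy estimate is computed only on data the model never saw.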
Abstract: Mobile Ad hoc networks (MANETs) are collections
of wireless mobile nodes dynamically reconfiguring and collectively
forming a temporary network. These networks assume no fixed
infrastructure and are often useful in battlefield
tactical operations or emergency search-and-rescue
operations where fixed infrastructure is neither feasible nor practical.
They also find use in ad hoc conferences, campus networks and
commercial recreational applications carrying multimedia traffic. All
of the above applications of MANETs require guaranteed levels of
performance as experienced by the end-user. This paper focuses on
key challenges in provisioning predetermined levels of such Quality
of Service (QoS). It also identifies functional areas where QoS
models are currently defined and used. Evolving functional areas
where performance and QoS provisioning may be applied are also
identified and some suggestions are provided for further research in
this area. Although each of the above functional areas have been
discussed separately in recent research studies, since these QoS
functional areas are highly correlated and interdependent, a
comprehensive and comparative analysis of these areas and their
interrelationships is desired. In this paper we have attempted to
provide such an overview.
Abstract: This paper introduces a technique of distortion
estimation in image watermarking using Genetic Programming (GP).
The distortion is estimated by considering the problem of obtaining a
distorted watermarked signal from the original watermarked signal as
a function regression problem. This function regression problem is
solved using GP, where the original watermarked signal is
considered as an independent variable. The GP-based distortion
estimation scheme is evaluated against Gaussian and JPEG
compression attacks. Gaussian attacks of different strengths were
applied by varying the standard deviation, and the severity of the
JPEG compression attack was also varied. Experimental
results demonstrate that the proposed technique is able to detect the
watermark even under strong distortions and is more robust
against attacks.
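The function-regression-by-GP idea can be sketched with a toy evolutionary search over expression trees. Everything below is an illustrative assumption: the operator set, tree representation, mutation scheme and data are stand-ins, not the paper's GP system or watermarked signals.

```python
import random

# Toy function regression by evolutionary search over expression trees.
OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:           # terminal: x or constant
        return 'x' if random.random() < 0.5 else random.uniform(-1, 1)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, xs, ys):                            # mean squared error
    return sum((evaluate(tree, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def mutate(tree):
    if random.random() < 0.3 or not isinstance(tree, tuple):
        return random_tree(2)                         # replace a whole subtree
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def regress(xs, ys, generations=200, pop_size=50):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, xs, ys))
        survivors = pop[:pop_size // 2]               # truncation selection
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=lambda t: fitness(t, xs, ys))

# Recover a mild synthetic "distortion" y = 0.8 x + 0.1 from samples.
random.seed(0)
xs = [i / 10 for i in range(-10, 11)]
ys = [0.8 * x + 0.1 for x in xs]
best = regress(xs, ys)
print(fitness(best, xs, ys))  # small residual error
```

In the paper's setting, `xs` would be samples of the original watermarked signal and `ys` the corresponding distorted samples, so the evolved function models the attack.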
Abstract: Chatter vibration has been a troublesome problem for
machine tools moving toward high-precision and high-speed machining.
Essentially, the machining performance is determined by the dynamic
characteristics of the machine tool structure and the dynamics of the
cutting process. Therefore, the dynamic vibration behavior of the
spindle tool system largely determines the performance of the machine tool. The
purpose of this study is to investigate the influences of the machine
frame structure on the dynamic frequency of spindle tool unit through
finite element modeling approach. To this end, a realistic finite
element model of the vertical milling system was created by
incorporating the spindle-bearing model into the spindle head stock of
the machine frame. Using this model, the dynamic characteristics of
the milling machines with different structural designs of spindle head
stock and identical spindle tool unit were demonstrated. The results of
the finite element modeling reveal that the spindle tool unit behaves
more compliantly when the excitation frequency approaches the natural
mode of the spindle tool, while the spindle tool shows higher dynamic
stiffness at lower frequencies, which may be governed by the structural
mode of the milling head. Under this condition, it is concluded that the
structural configuration of spindle head stock associated with the
vertical column of milling machine plays an important role in
determining the machining dynamics of the spindle unit.
Abstract: The bromination of five selected pharmaceuticals
(metoprolol, naproxen, amoxicillin, hydrochlorothiazide and
phenacetin) in ultrapure water and in three water matrices (a
groundwater, a surface water from a public reservoir and a secondary
effluent from a WWTP) was investigated. The apparent rate
constants for the bromination reaction were determined as a function
of the pH, and the sequence obtained for the reaction rate was
amoxicillin > naproxen >> hydrochlorothiazide ≈ phenacetin ≈
metoprolol. A kinetic mechanism was proposed that specifies the
dissociation of bromine and of each pharmaceutical according to their
pKa values and the pH, which allowed the intrinsic rate
constants for every elementary reaction to be determined. The influence of the main
operating conditions (pH, initial bromine dose, and the water matrix)
on the degradation of pharmaceuticals was established. In addition,
the presence of bromide in chlorination experiments was
investigated. The presence of bromide in wastewaters and drinking
waters in the range of 10 to several hundred μg L-1 slightly
accelerated the oxidation of the selected pharmaceuticals during chlorine
disinfection.
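How such a mechanism yields a pH-dependent apparent rate constant can be illustrated numerically: weight intrinsic rate constants by the acid/base speciation of bromine (HOBr/OBr-) and of the pharmaceutical. All pKa and k values below are invented for illustration and are not the paper's results.

```python
# Illustrative sketch: apparent bromination rate constant as a
# pH-weighted sum of intrinsic rate constants over acid/base species.
# All numeric values are hypothetical, not the paper's data.

def fraction_deprotonated(pH, pKa):
    """Fraction of an acid present in its deprotonated form."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

def apparent_rate_constant(pH, pKa_HOBr, pKa_drug, k):
    """k_app = sum over bromine species i and drug species j of
    k[i][j] * alpha_i * alpha_j (one term per elementary reaction)."""
    f_OBr = fraction_deprotonated(pH, pKa_HOBr)
    a_brom = (1 - f_OBr, f_OBr)          # (HOBr, OBr-) fractions
    f_A = fraction_deprotonated(pH, pKa_drug)
    a_drug = (1 - f_A, f_A)              # (HA, A-) fractions
    return sum(k[i][j] * a_brom[i] * a_drug[j]
               for i in range(2) for j in range(2))

# Hypothetical intrinsic constants (M^-1 s^-1): HOBr + A- dominant.
k = [[1e1, 1e4],   # HOBr reacting with HA, A-
     [1e0, 1e2]]   # OBr- reacting with HA, A-
v = apparent_rate_constant(7.0, 8.6, 9.5, k)
print(v)
```

Fitting measured k_app values over a range of pH to this sum is what allows the intrinsic constants of the elementary reactions to be extracted.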
Abstract: This paper presents a parametric performance-based
design model for optimizing hospital design. The design model
operates with geometric input parameters defining the functional
requirements of the hospital, and with input parameters in the form of
performance objectives defining the design requirements and
preferences of the hospital. The design
model takes its point of departure in the hospital functionalities, expressed as a set
of defined parameters and rules describing the design requirements
and preferences.
Abstract: Background noise is particularly damaging to speech
intelligibility for people with hearing loss, especially sensorineural
loss patients. Several investigations of speech intelligibility have
demonstrated that sensorineural loss patients need a 5-15 dB higher SNR
than normal-hearing subjects. This paper describes a Discrete
Cosine Transform Power Normalized Least Mean Square (DCT-LMS) algorithm
to improve the SNR and the convergence rate of the LMS
for sensorineural loss patients. Since it requires only real arithmetic,
it achieves faster convergence than the time-domain
LMS, and the transformation improves the eigenvalue
distribution of the input autocorrelation matrix of the LMS filter.
The DCT has good orthonormal, separable, and energy compaction
properties. Although the DCT does not separate frequencies, it is a
powerful signal decorrelator. It is a real valued function and thus
can be effectively used in real-time operation. The advantages of
DCT-LMS as compared to standard LMS algorithm are shown via
SNR and eigenvalue ratio computations. Exploiting the symmetry
of the basis functions, the DCT transform matrix [AN] can be
factored into a series of ±1 butterflies and rotation angles. This
factorization results in one of the fastest DCT implementations. There
are different ways to obtain such factorizations; this work uses the fast
factored DCT algorithm developed by Chen et al. The
computer simulation results show the superior convergence
characteristics of the proposed algorithm, improving the SNR by at
least 10 dB for input SNRs at or below 0 dB, together with faster
convergence speed and better time and frequency characteristics.
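A transform-domain power-normalized LMS of the kind described can be sketched as follows. This is an illustrative reimplementation of the general DCT-LMS idea only: it uses a direct-matrix DCT rather than the fast factored Chen algorithm, and the step sizes and demo signals are assumptions.

```python
import math
import random

def dct_matrix(N):
    """Orthonormal DCT-II matrix."""
    return [[math.sqrt((1 if k == 0 else 2) / N) *
             math.cos(math.pi * k * (2 * n + 1) / (2 * N))
             for n in range(N)] for k in range(N)]

def dct_lms(x, d, N=8, mu=0.05, beta=0.9, eps=1e-6):
    A = dct_matrix(N)
    w = [0.0] * N                 # adaptive weights in the DCT domain
    p = [1.0] * N                 # running power estimate per DCT bin
    buf = [0.0] * N               # input tap-delay line
    y_out = []
    for n in range(len(x)):
        buf = [x[n]] + buf[:-1]
        u = [sum(A[k][i] * buf[i] for i in range(N)) for k in range(N)]
        for k in range(N):        # track per-bin input power
            p[k] = beta * p[k] + (1 - beta) * u[k] ** 2
        y = sum(w[k] * u[k] for k in range(N))
        e = d[n] - y              # a-priori error
        for k in range(N):        # power-normalized weight update
            w[k] += mu * e * u[k] / (p[k] + eps)
        y_out.append(y)
    return y_out, w

# Demo: identify the short FIR system d[n] = 0.5 x[n] + 0.25 x[n-1].
random.seed(1)
x = [random.uniform(-1, 1) for _ in range(2000)]
d = [0.5 * x[n] + (0.25 * x[n - 1] if n else 0.0) for n in range(len(x))]
y, w = dct_lms(x, d)
mse = lambda lo, hi: sum((d[n] - y[n]) ** 2 for n in range(lo, hi)) / (hi - lo)
print(mse(0, 100), mse(1900, 2000))  # the error collapses after convergence
```

Dividing each bin's update by its own power estimate is what equalizes the convergence modes that an ill-conditioned input autocorrelation matrix would otherwise slow down.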
Abstract: With the help of coincidence degree theory, sufficient
conditions for the existence of periodic solutions of a food chain model
with functional responses on time scales are established.
Abstract: The Programmable Logic Controller (PLC) plays a
vital role in automation and process control. Grafcet is used for
representing the control logic, and traditional programming
languages are used for describing the pure algorithms. Grafcet divides
the process to be automated into elementary sequences that
can be easily implemented. Each sequence represents a step with
associated actions programmed, as appropriate, using textual or
graphical languages. The programming task is simplified by using a set of
subroutines shared by several steps. The paper presents an
example implementation for a machine that punches sheets and
plates. For programming a complex sequential process, the use of
graphical languages is a practical solution. The Grafcet state can be
used for debugging and for locating malfunctions.
The method, combined with a body of acquired process knowledge,
reduces the downtime of the machine and
improves productivity.
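The step/transition structure that Grafcet imposes can be mimicked by a small interpreter. The step names, actions, and transition conditions below are hypothetical, loosely inspired by a punching cycle; this is not the paper's actual PLC program.

```python
# A tiny Grafcet-style step/transition interpreter (illustrative only).
steps = {                              # step -> action run while active
    'IDLE':   lambda out: out.append('motor off'),
    'CLAMP':  lambda out: out.append('clamp sheet'),
    'PUNCH':  lambda out: out.append('punch down'),
    'RETURN': lambda out: out.append('punch up'),
}
transitions = {                        # step -> (firing condition, next step)
    'IDLE':   (lambda s: s == 'start',   'CLAMP'),
    'CLAMP':  (lambda s: s == 'clamped', 'PUNCH'),
    'PUNCH':  (lambda s: s == 'bottom',  'RETURN'),
    'RETURN': (lambda s: s == 'top',     'IDLE'),
}

def scan(active, signal, out):
    """One PLC scan cycle: run the active step, then test its transition."""
    steps[active](out)
    condition, nxt = transitions[active]
    return nxt if condition(signal) else active

out, state = [], 'IDLE'
for sig in ['start', 'clamped', 'bottom', 'top']:
    state = scan(state, sig, out)
print(state, out)  # back to IDLE after one full punching cycle
```

Because the active step is explicit, this structure is exactly what makes the Grafcet state useful for debugging and malfunction location: inspecting `state` tells the operator where the sequence has stalled.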
Abstract: This paper presents an application of particle swarm
optimization (PSO) to grounding grid planning and compares it with
a genetic algorithm (GA). Firstly, based on IEEE
Std. 80, the cost function of the grounding grid and the constraints on
ground potential rise, step voltage and touch voltage are constructed
to formulate the optimization problem of grounding grid planning.
Secondly, GA and PSO algorithms for obtaining the optimal grounding
grid solution are developed. Finally, a grounding grid
planning case is presented to show the superiority and practicality
of the PSO algorithm, together with the proposed planning results,
in terms of cost and computational time.
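The PSO part of such a planning problem can be sketched generically. The swarm update below is the standard PSO rule; the objective is a stand-in sphere function with box constraints, not the IEEE Std. 80 grounding-grid cost model.

```python
import random

# Minimal PSO sketch for a bound-constrained cost minimization.
def pso(cost, dim, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                                  # personal bests
    pbest = [cost(x) for x in X]
    g = P[min(range(n_particles), key=lambda i: pbest[i])][:]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))  # clip to bounds
            f = cost(X[i])
            if f < pbest[i]:
                pbest[i], P[i] = f, X[i][:]
                if f < cost(g):
                    g = X[i][:]
    return g, cost(g)

# Stand-in objective: sphere function, minimum at the origin.
random.seed(2)
best, best_cost = pso(lambda x: sum(v * v for v in x), dim=4, bounds=(-5, 5))
print(best_cost)  # near zero
```

In the grounding-grid application, the decision vector would encode grid geometry, and the cost function would add penalty terms for violating the ground potential rise, step voltage and touch voltage constraints.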
Abstract: The effect of the number of quantum dot (QD) layers
on the saturated gain of doped QD semiconductor optical amplifiers
(SOAs) has been studied using multi-population coupled rate
equations. The developed model takes into account the effect of
carrier coupling between adjacent layers. It has been found that
increasing the number of QD layers (K) increases the unsaturated
optical gain for K
Abstract: Searching for a tertiary substructure that geometrically
matches the 3D pattern of the binding site of a well-studied protein is one way to predict protein function. In our previous work,
a web server was built to predict protein-ligand binding sites
based on automatically extracted templates. A drawback of such templates, however, is that they make the web server prone to many
false positive matches. In this study, we present a sequence-order constraint that reduces the false positive matches arising when automatically
extracted templates are used to predict protein-ligand binding sites. The binding site predictor comprises i) an automatically constructed template library and ii) a local structure alignment algorithm for
querying the library. The sequence-order constraint is employed to
identify inconsistencies between the local regions of the query protein and the templates. Experimental results reveal that the sequence-order constraint largely reduces the false positive matches and is effective for template-based binding site prediction.
Abstract: The solvated electron is self-trapped (polaron) owing
to strong interaction with the quantum polarization field. If the
electron and quantum field are strongly coupled then the collective
localized state of the field and quasi-particle is formed. In such a
formation the electron motion is rather intricate. On the one hand, the
electron oscillates within a rather deep polarization potential well
and undergoes optical transitions; on the other, it moves
together with the center of inertia of the system and participates in
the thermal random walk. The problem is to separate these motions
correctly, rigorously taking into account the conservation laws. This
can be conveniently done using the Bogolyubov-Tyablikov method of
canonical transformation to the collective coordinates. This
transformation removes the translational degeneracy and allows one
to develop the successive approximation algorithm for the energy and
wave function while simultaneously fulfilling the law of conservation
of total momentum of the system. The resulting equations determine
the electron transitions and depend explicitly on the translational
velocity of the quasi-particle as a whole. The frequency of the optical
transition is calculated for the solvated electron in ammonia, and an
estimate is made of the thermally induced spectral bandwidth.
Abstract: In visual servoing systems, data obtained by
vision is used for controlling robots. In this project, a
simulator previously proposed for simulating the performance of a
6R robot was first examined in terms of software and testing, and
its existing defects were remedied. In the first
version of the simulation, the robot was directed toward the target object only by a position-based method using two cameras in the
environment. In the new version of the software, three cameras are used simultaneously. The camera installed eye-in-hand on the end-effector of the robot is used for visual servoing by a
feature-based method: the target object is recognized according to
its characteristics, and the robot is directed toward the object by an algorithm similar to the function of human
eyes. Then, the function and accuracy of the robot's operation are examined through position-based visual servoing using
the two cameras installed eye-to-hand in the environment. Finally, the obtained results are tested against the ANSI/RIA R15.05-2 standard.
Abstract: Sensory nerves in the foot play an important part in the diagnosis of various neuropathy disorders, especially in diabetes mellitus. However, a detailed description of the anatomical distribution of the nerves is currently lacking. A computational model of the afferent nerves in the foot may be a useful tool for the study of diabetic neuropathy. In this study, we present the development of an anatomically-based model of various major sensory nerves of the sole and dorsal sides of the foot. In addition, we present an algorithm for generating synthetic somatosensory nerve networks in the big-toe region of a right foot model. The algorithm was based on a modified version of the Monte Carlo algorithm, with the capability of varying the intra-epidermal nerve fiber density in different regions of the foot model. Preliminary results from the combined model show the realistic anatomical structure of the major nerves as well as the smaller somatosensory nerves of the foot. The model may now be developed to investigate the functional outcomes of structural neuropathy in diabetic patients.
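A density-modulated Monte Carlo placement step of the kind the abstract mentions might look as follows; the region shape and density map are hypothetical stand-ins, not the foot model's parameters.

```python
import random

# Toy sketch of density-modulated Monte Carlo placement of nerve-fiber
# entry points; an illustrative guess at the idea, with invented values.

def place_fibers(n_target, density, width=1.0, height=1.0):
    """Rejection-sample points in [0,width] x [0,height]; density(x, y)
    in [0,1] scales the local acceptance probability."""
    points = []
    while len(points) < n_target:
        x, y = random.uniform(0, width), random.uniform(0, height)
        if random.random() < density(x, y):
            points.append((x, y))
    return points

# Hypothetical density map: fibers twice as dense in the left half.
density = lambda x, y: 1.0 if x < 0.5 else 0.5
random.seed(3)
pts = place_fibers(1000, density)
frac_left = sum(1 for x, _ in pts if x < 0.5) / len(pts)
print(frac_left)  # roughly 2/3 of points fall in the denser half
```

Varying `density` per region is one simple way to reproduce the region-dependent intra-epidermal fiber densities the abstract describes.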
Abstract: In many data mining applications, it is a priori known
that the target function should satisfy certain constraints imposed
by, for example, economic theory or a human-decision maker. In this
paper we consider partially monotone prediction problems, where the
target variable depends monotonically on some of the input variables
but not on all. We propose a novel method to construct prediction
models, where monotone dependences with respect to some of
the input variables are preserved by virtue of construction. Our
method belongs to the class of mixture models. The basic idea is to
convolute monotone neural networks with weight (kernel) functions
to make predictions. By using simulation and real case studies,
we demonstrate the application of our method. To obtain a sound
assessment of the performance of our approach, we use standard
neural networks with weight decay and partially monotone linear
models as benchmark methods for comparison. The results show that
our approach outperforms partially monotone linear models in terms
of accuracy. Furthermore, the incorporation of partial monotonicity
constraints not only leads to models that are in accordance with the
decision maker's expertise, but also reduces considerably the model
variance in comparison to standard neural networks with weight
decay.
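The idea of guaranteeing monotone dependence by construction can be shown in miniature: if the weights attached to a monotone input (and the output weights) are forced nonnegative and the activations are increasing, the output is nondecreasing in that input no matter what training does. This sketch is illustrative, not the paper's mixture model.

```python
import math
import random

H = 5  # hidden units

def forward(x_mono, x_free, W_mono, W_free, b, v, c=0.0):
    # abs() forces the weights on x_mono (and the output weights) to be
    # nonnegative, so the composition is nondecreasing in x_mono.
    h = [math.tanh(abs(W_mono[j]) * x_mono + W_free[j] * x_free + b[j])
         for j in range(H)]
    return sum(abs(v[j]) * h[j] for j in range(H)) + c

# Even with arbitrary (here random, untrained) parameters, the
# guarantee holds: the output never decreases as x_mono increases.
random.seed(4)
W_mono, W_free, b, v = ([random.gauss(0, 1) for _ in range(H)]
                        for _ in range(4))
xs = [i / 10 for i in range(-20, 21)]
ys = [forward(x, 0.3, W_mono, W_free, b, v) for x in xs]
print(all(a <= b for a, b in zip(ys, ys[1:])))  # True: monotone in x_mono
```

Dependence on `x_free` remains unconstrained, which mirrors the partial-monotonicity setting: only some inputs carry the monotone prior.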
Abstract: Rotating stages in the semiconductor and display industries, among many other fields, require challenging accuracy to perform their functions properly. In particular, axis-of-rotation error in a rotary system is significant: spindle error motion in aligner, wire bonder and inspection machines results in poor quality of manufactured goods. To evaluate and improve the performance of such precision rotary stages, unessential movements in the other five degrees of freedom of the rotary stage must be measured and analyzed. In this paper, we measured the three translations and two tilt motions of a rotating stage with high-precision capacitive sensors. To obtain the radial error motion from the T.I.R. (Total Indicated Reading) in the radial direction, we used Donaldson's reversal technique, and the axial components of the spindle tilt error motion were obtained accurately from the axial-direction sensor outputs by the Estler face motion reversal technique. Furthermore, we defined and measured the sensitivity of positioning error to the five error motions.
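Donaldson's reversal can be illustrated with synthetic data: the normal and reversed runs give T1 = A + S and T2 = A - S, so the artifact form error A and the spindle radial error motion S separate exactly by half-sum and half-difference. The amplitudes below are arbitrary illustrative values.

```python
import math

# Donaldson reversal with synthetic data:
#   T1(t) = A(t) + S(t)   (normal setup)
#   T2(t) = A(t) - S(t)   (artifact and probe reversed)
# so A = (T1 + T2) / 2 and S = (T1 - T2) / 2.

N = 360
A = [0.5 * math.cos(2 * math.radians(t)) for t in range(N)]  # artifact form
S = [0.2 * math.sin(3 * math.radians(t)) for t in range(N)]  # spindle error
T1 = [a + s for a, s in zip(A, S)]
T2 = [a - s for a, s in zip(A, S)]

A_rec = [(t1 + t2) / 2 for t1, t2 in zip(T1, T2)]  # recovered artifact form
S_rec = [(t1 - t2) / 2 for t1, t2 in zip(T1, T2)]  # recovered spindle error
print(max(abs(a - b) for a, b in zip(S, S_rec)))   # ~0: exact separation
```

The same half-sum/half-difference structure underlies the Estler face motion reversal applied to the axial outputs.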
Abstract: Imperfect knowledge cannot always be avoided. Imperfections may take several forms: uncertainty, imprecision and incompleteness. Among methods for the management of imperfect knowledge, fuzzy set-based techniques stand out. The choice of a method to process data is linked to the choice of knowledge representation, which can be numerical, symbolic, logical or semantic, and it depends on the nature of the problem to be solved, for example decision support, which is the focus of our study. Fuzzy logic is used for its ability to manage imprecise knowledge, but it can take advantage of the ability of neural networks to learn coefficients or functions. Such an association of methods is typical of so-called soft computing. In this study a new method was used for the management of imprecision in collected knowledge related to economic analysis of the construction industry in Turkey, because sudden changes in economic factors decrease the competitive strength of construction companies. Better evaluation of these changes in economic factors from the viewpoint of the construction industry will positively influence the decisions of companies engaged in construction.
Abstract: This paper describes a study of geometrically
nonlinear free vibration of thin circular functionally graded plates
(CFGP) resting on Winkler elastic foundations. The material properties
of the functionally graded composites examined here are assumed to
be graded smoothly and continuously through the direction of the
plate thickness according to a power law and are estimated using the
rule of mixture. The theoretical model is based on classical plate
theory and the von Kármán geometrical nonlinearity assumptions.
A homogenization procedure (HP) is developed to reduce the
problem considered here to that of isotropic homogeneous circular
plates resting on a Winkler foundation. Hamilton's principle is applied
and a multimode approach is derived to calculate the fundamental
nonlinear frequency parameters which are found to be in a good
agreement with published results. In addition, the influence of
the foundation parameters on the nonlinear fundamental frequency
has been analysed.
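The power-law rule of mixture used for such graded plates has a standard form, E(z) = (E_c - E_m)(z/h + 1/2)^n + E_m for z in [-h/2, h/2]; the sketch below evaluates it with illustrative aluminium/alumina-like values, not the paper's data.

```python
# Standard power-law rule of mixture for a through-thickness graded
# plate property; numeric values are illustrative assumptions.

def graded_modulus(z, h, E_m, E_c, n):
    """E(z) = (E_c - E_m) * (z/h + 1/2)**n + E_m, z in [-h/2, h/2]."""
    return (E_c - E_m) * (z / h + 0.5) ** n + E_m

E_m, E_c, h, n = 70e9, 380e9, 0.01, 2.0   # metal/ceramic moduli (Pa), m, index
print(graded_modulus(-h / 2, h, E_m, E_c, n))  # pure metal face, ~70 GPa
print(graded_modulus(h / 2, h, E_m, E_c, n))   # pure ceramic face, ~380 GPa
```

The power-law index n controls how quickly the property sweeps from the metal face to the ceramic face, which is what the homogenization procedure averages over.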
Abstract: In this paper we explore the application of a formal proof system to verification problems in cryptography. Cryptographic properties concerning the correctness or security of some cryptographic algorithms are of great interest. Besides some basic lemmata, we explore an implementation of a complex function that is used in cryptography. More precisely, we describe formal properties of this implementation that we computer-prove. We describe formalized probability distributions (σ-algebras, probability spaces and conditional probabilities). These are given in the formal language of the formal proof system Isabelle/HOL. Moreover, we computer-prove Bayes' formula. Besides, we describe an application of the presented formalized probability distributions to cryptography. Furthermore, this paper shows that computer proofs of complex cryptographic functions are possible by presenting an implementation of the Miller-Rabin primality test that admits formal verification. Our achievements are a step towards computer verification of cryptographic primitives. They describe a basis for computer verification in cryptography. Computer verification can be applied to further problems in cryptographic research, if the corresponding basic mathematical knowledge is available in a database.
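For reference, the Miller-Rabin test whose formal verification the paper presents looks as follows in ordinary executable form; this is the classical probabilistic algorithm, not the Isabelle/HOL formalization itself.

```python
import random

# Classical Miller-Rabin probabilistic primality test.
def miller_rabin(n, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):            # quick trial division
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                         # write n - 1 = 2^s * d, d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)        # random base
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                      # a witnesses compositeness
    return True                               # probably prime

print(miller_rabin(2**61 - 1))   # True: a Mersenne prime
print(miller_rabin(3215031751))  # False: composite (151 * 751 * 28351)
```

Each round errs with probability at most 1/4 on a composite input, so 20 independent rounds bound the error by 4^-20; bounding that error formally is exactly where the formalized probability spaces and conditional probabilities come in.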