Abstract: In this paper, the concepts of dichotomous logistic
regression (DLR) with leave-one-out (L-O-O) are discussed. To
illustrate this, L-O-O was run to determine the importance of the
simulation conditions for robust tests of spread procedures with good
Type I error rates. The resultant model was then evaluated. The
discussion covers 1) assessment of the accuracy of the model, and
2) the parameter estimates. These are presented and illustrated by
modeling the relationship between the dichotomous dependent
variable (Type I error rates) and a set of independent variables (the
simulation conditions). Base SAS software, with PROC
LOGISTIC and DATA step functions, can be used to perform the
DLR analysis.
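As an illustration outside SAS, the DLR-with-L-O-O idea can be sketched in Python from scratch; the data, learning rate, and epoch count below are hypothetical, chosen only to keep the example self-contained:

```python
import math

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a dichotomous logistic regression by plain gradient descent."""
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * n_feat
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            for j in range(n_feat):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Hypothetical data: one "simulation condition" predicting a
# dichotomous outcome (1 = acceptable Type I error rate).
X = [[0.1], [0.2], [0.3], [0.4], [1.1], [1.2], [1.3], [1.4]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

# Leave-one-out: refit the model n times, each time holding out one case.
hits = 0
for i in range(len(X)):
    X_tr = X[:i] + X[i + 1:]
    y_tr = y[:i] + y[i + 1:]
    w, b = fit_logistic(X_tr, y_tr)
    hits += predict(w, b, X[i]) == y[i]

loo_accuracy = hits / len(X)
print(loo_accuracy)
```

In SAS, the equivalent would use PROC LOGISTIC with a DATA step loop over held-out observations; the Python version simply makes the refit-and-predict cycle explicit.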
Abstract: The process of laser absorption in the skin during
laser irradiation is a critical point in medical treatment
applications. Delivering the correct amount of laser light is a critical
element in photodynamic therapy (PDT): too much laser
light can damage tissues in the skin, while too little cannot
enhance the PDT procedure. Knowledge of the skin-tone-dependent
distribution of 635 nm radiation and its penetration
depth in skin is a very important precondition for investigating
laser-induced effects in PDT for epidermal diseases such as
psoriasis. The aim of this work was to estimate the optimum effect
of a diode laser (635 nm) on the treatment of epidermal diseases in
skin of different colors, and thereby to improve the safety of lasers in
PDT treatment of epidermal diseases. The Advanced Systems Analysis
Program (ASAP), a new approach to investigating PDT that
depends on the optical properties of different skin colors, was used in the
present work. A two-layered Realistic Skin Model (RSM), comprising the stratum
corneum and the epidermis, was irradiated with a red laser (635 nm, 10 mW)
in a radiative-transfer calculation to study fluence and absorbance at different
penetration depths for various human skin colors. Several skin tones (very
fair, fair, light, medium, and dark) were used in the radiative-transfer
calculations. This investigation applied the principles of laser-tissue
interaction when the skin is optically injected by a red laser diode. The results
demonstrate that the power characteristics of a laser diode (635 nm)
can affect the treatment of epidermal disease in skins of various colors.
The power absorbed by the various human skins was recorded and
analyzed in order to find the influence of melanin on PDT
treatment of epidermal disease. The two-layered RSM shows that the
change in penetration depth in the epidermal layer of colored skin has a
large effect on the distribution of absorbed laser light in the skin; this is
due to the variation of melanin concentration in each skin color.
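The depth dependence described above can be illustrated with a simple Beer-Lambert decay; the attenuation coefficients below are illustrative stand-ins for melanin-dependent values, not numbers from this study:

```python
import math

# Illustrative (not from the study) effective attenuation coefficients
# at 635 nm; higher melanin concentration -> stronger attenuation.
mu_eff_per_mm = {"very fair": 0.8, "fair": 1.0, "light": 1.3,
                 "medium": 1.8, "dark": 2.6}

def fluence(f0, mu_eff, depth_mm):
    """Beer-Lambert style exponential decay of fluence with depth."""
    return f0 * math.exp(-mu_eff * depth_mm)

def penetration_depth(mu_eff):
    """Depth at which fluence falls to 1/e of its surface value."""
    return 1.0 / mu_eff

f0 = 10.0  # 10 mW source, unit beam area assumed
for tone, mu in mu_eff_per_mm.items():
    print(tone, round(fluence(f0, mu, 0.5), 3),
          round(penetration_depth(mu), 3))
```

The darker tones show both a lower fluence at a fixed depth and a shorter 1/e penetration depth, which is the qualitative trend the RSM results describe.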
Abstract: This paper introduces a technique for distortion
estimation in image watermarking using Genetic Programming (GP).
The distortion is estimated by treating the problem of obtaining a
distorted watermarked signal from the original watermarked signal as
a function-regression problem. This regression problem is
solved using GP, with the original watermarked signal
as the independent variable. The GP-based distortion
estimation scheme is tested against Gaussian and JPEG
compression attacks. Gaussian attacks of different
strengths were applied by varying the standard deviation, and the JPEG
compression attack was varied by introducing different levels of
distortion. Experimental results demonstrate that the proposed
technique can detect the watermark even under strong distortion
and is more robust against attacks.
Abstract: This paper evaluates the association between
economic environment in the districts of Madrid (Spain) and physical
inactivity, using income per capita as an indicator of economic
environment. The analysis included 6,601 individuals aged 16 to 74
years. The measure of association estimated was the prevalence odds
ratio for physical inactivity by income per capita. After adjusting for
sex, age, and individual socioeconomic characteristics, people living
in the districts with the lowest per capita income had an odds ratio for
physical inactivity 1.58 times higher (95% confidence interval 1.35 to
1.85) than those living in districts with the highest per capita income.
Additional adjustment for the availability of sports facilities in each
district did not decrease the magnitude of the association. These
findings show that the widely held assumption that the
availability of sports and recreational facilities explains the
relation between economic environment and physical inactivity
cannot be considered a universal observation.
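For readers unfamiliar with the measure, a prevalence odds ratio and its Wald 95% confidence interval can be computed from a 2x2 table as sketched below; the counts are hypothetical and the study's covariate adjustment (sex, age, individual socioeconomic characteristics) is not reproduced:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio for a 2x2 table with a Wald 95% confidence interval.

    a = inactive, low income      b = active, low income
    c = inactive, high income     d = active, high income
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# Hypothetical counts, not the study's data.
or_, lo, hi = odds_ratio_ci(300, 700, 200, 800)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

In the study itself the adjusted estimate comes from a logistic model rather than a raw table, but the interpretation of the odds ratio and interval is the same.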
Abstract: The rapid growth of e-Commerce services has been
clearly observed over the past decade. However, methods to
verify authenticated users still depend widely on numeric
approaches, so the search for other verification methods suitable for
online e-Commerce is an interesting issue. In this paper, a new online
signature-verification method using an angular transformation is
presented. Delay shifts existing in online signatures are estimated by
an estimation method relying on angle representation. In the
proposed signature-verification algorithm, all components of the input
signature are extracted by considering the discontinuous break points
in the stream of angular values. The estimated delay shift is then
captured by comparison with the selected reference signature, and the
matching error is computed as the main feature used in the verification
process. The threshold offsets are calculated from the two error
characteristics of the signature-verification problem, the False Rejection
Rate (FRR) and the False Acceptance Rate (FAR). The levels of these two
error rates depend on the chosen decision threshold, whose value is
set so as to realize the Equal Error Rate (EER; FAR = FRR). The
experimental results, obtained with a simple program deployed
on the Internet to demonstrate e-Commerce services, show that the
proposed method provides 95.39% correct verifications, 7%
better than a DP-matching-based signature-verification method. In
addition, signature verification with component extraction
provides more reliable results than making a decision on the whole signature.
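The EER decision rule described above can be sketched as follows; the FAR/FRR curves are hypothetical, and a real system would compute them from genuine and forged signature matching scores:

```python
def equal_error_rate(far, frr, thresholds):
    """Pick the threshold where FAR and FRR are closest (the EER point)."""
    best = min(range(len(thresholds)), key=lambda i: abs(far[i] - frr[i]))
    eer = (far[best] + frr[best]) / 2.0
    return thresholds[best], eer

# Hypothetical error curves: FAR falls and FRR rises as the
# decision threshold tightens.
thresholds = [0.1, 0.2, 0.3, 0.4, 0.5]
far = [0.40, 0.25, 0.12, 0.05, 0.01]
frr = [0.01, 0.04, 0.11, 0.20, 0.35]

t, eer = equal_error_rate(far, frr, thresholds)
print(t, eer)
```

On a dense threshold grid the two curves cross and the averaged value at the crossing is the reported EER.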
Abstract: In this paper, an H1-Galerkin mixed finite element method is discussed for the coupled Burgers equations. Optimal error estimates for the semi-discrete and fully discrete schemes of the coupled Burgers equations are derived.
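For reference, one form of the coupled viscous Burgers system frequently studied in this setting is the following (a generic form with constants η, ξ, α, β; this is an assumption, as the abstract does not reproduce its equations):

```latex
u_t - u_{xx} + \eta\, u\, u_x + \alpha\,(u v)_x = 0, \qquad
v_t - v_{xx} + \xi\, v\, v_x + \beta\,(u v)_x = 0,
```

where the nonlinear coupling enters through the mixed term \((uv)_x\), which is what distinguishes the coupled system from two independent Burgers equations.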
Abstract: Carrier Frequency Offset (CFO) due to the time-varying
fading channel is the main cause of the loss of orthogonality
among OFDM subcarriers, which leads to inter-carrier interference
(ICI). Hence, it is necessary to estimate and compensate the
CFO precisely. Especially for mobile broadband communications, the CFO and
the channel gain also have to be estimated and tracked to maintain
system performance. Thus, synchronization pilots are embedded in
every OFDM symbol to track the variations. In this paper, we present
a pilot scheme for both channel and CFO estimation in which the channel
estimation process can be carried out with only one OFDM symbol.
In addition, the proposed pilot scheme also provides better
performance in CFO estimation compared with the conventional
orthogonal pilot scheme, owing to the increased signal-to-interference
ratio.
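The link between CFO and ICI can be demonstrated with a toy OFDM symbol: applying a fractional subcarrier offset in the time domain leaks power from the occupied subcarrier into the others. This is a minimal sketch, not the paper's pilot scheme:

```python
import cmath

N = 8  # number of subcarriers

def dft(x, inverse=False):
    """Naive N-point DFT/IDFT, enough for a toy illustration."""
    s = 1 if inverse else -1
    out = []
    for k in range(N):
        acc = sum(x[n] * cmath.exp(s * 2j * cmath.pi * k * n / N)
                  for n in range(N))
        out.append(acc / N if inverse else acc)
    return out

# One OFDM symbol carrying data only on subcarrier 1.
data = [0j] * N
data[1] = 1 + 0j
time_sig = dft(data, inverse=True)

def apply_cfo(sig, eps):
    """Rotate time samples by a CFO of eps subcarrier spacings."""
    return [s * cmath.exp(2j * cmath.pi * eps * n / N)
            for n, s in enumerate(sig)]

def ici_power(eps):
    """Power leaked into the other subcarriers after demodulation."""
    rx = dft(apply_cfo(time_sig, eps))
    return sum(abs(rx[k]) ** 2 for k in range(N) if k != 1)

print(round(ici_power(0.0), 9), round(ici_power(0.2), 3))
```

With zero offset the demodulated symbol is exactly the transmitted one; a 0.2-subcarrier offset already moves a noticeable fraction of the signal power into neighboring bins, which is the ICI the abstract refers to.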
Abstract: The solvated electron is self-trapped (a polaron) owing
to its strong interaction with the quantum polarization field. If the
electron and the quantum field are strongly coupled, a collective
localized state of the field and quasi-particle is formed. In such a
formation the electron motion is rather intricate: on the one hand the
electron oscillates within a rather deep polarization potential well
and undergoes optical transitions, and on the other, it moves
together with the center of inertia of the system and participates in
the thermal random walk. The problem is to separate these motions
correctly, rigorously taking into account the conservation laws. This
can be conveniently done using the Bogolyubov-Tyablikov method of
canonical transformation to the collective coordinates. This
transformation removes the translational degeneracy and allows one
to develop the successive approximation algorithm for the energy and
wave function while simultaneously fulfilling the law of conservation
of total momentum of the system. The resulting equations determine
the electron transitions and depend explicitly on the translational
velocity of the quasi-particle as a whole. The frequency of the optical
transition is calculated for the solvated electron in ammonia, and an
estimate is made for the thermally induced spectral bandwidth.
Abstract: In the present study, the surface temperature history of the adaptor section of a two-stage supersonic launch vehicle is accurately predicted. The full Navier-Stokes equations are used to estimate the aerodynamic heat flux, and one-dimensional heat conduction in the solid phase is used to compute the temperature history. The instantaneous surface temperature is used to update the applied heat flux, which improves the accuracy of the results.
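The one-dimensional conduction step can be sketched with an explicit finite-difference scheme driven by an imposed surface heat flux; all material properties and the (here constant) heat flux are assumed values for illustration, not the vehicle's:

```python
# Explicit finite-difference sketch of 1-D heat conduction with an
# applied surface heat flux (illustrative properties, not the vehicle's).
nx, dx = 21, 0.001            # 20 mm wall, 1 mm cells
alpha = 1e-5                  # thermal diffusivity, m^2/s
k_cond = 15.0                 # conductivity, W/(m K)
q_wall = 5e4                  # aerodynamic heat flux, W/m^2 (held constant here)
dt = 0.4 * dx * dx / alpha    # stable explicit step (<= 0.5 dx^2/alpha)

T = [300.0] * nx              # initial temperature, K

def step(T):
    Tn = T[:]
    # interior nodes: classic FTCS update
    for i in range(1, nx - 1):
        Tn[i] = T[i] + alpha * dt / dx**2 * (T[i + 1] - 2 * T[i] + T[i - 1])
    # surface node: ghost-node treatment of the imposed heat flux
    Tn[0] = T[0] + alpha * dt / dx**2 * (2 * T[1] - 2 * T[0]
                                         + 2 * dx * q_wall / k_cond)
    # back face held adiabatic
    Tn[-1] = Tn[-2]
    return Tn

for _ in range(500):
    T = step(T)
print(round(T[0], 1))
```

In the coupled procedure the abstract describes, q_wall would be re-evaluated from the flow solution using the instantaneous surface temperature instead of being held fixed.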
Abstract: This paper describes a study of geometrically
nonlinear free vibration of thin circular functionally graded (CFGP)
plates resting on Winkler elastic foundations. The material properties
of the functionally graded composites examined here are assumed to
be graded smoothly and continuously through the direction of the
plate thickness according to a power law and are estimated using the
rule of mixture. The theoretical model is based on classical plate
theory and the von Kármán geometrical nonlinearity assumptions.
A homogenization procedure (HP) is developed to reduce the
problem considered here to that of isotropic homogeneous circular
plates resting on a Winkler foundation. Hamilton's principle is applied
and a multimode approach is derived to calculate the fundamental
nonlinear frequency parameters, which are found to be in good
agreement with published results. The influence of the foundation
parameters on the nonlinear fundamental frequency has also been
analysed.
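The power-law rule of mixture mentioned above is commonly written as follows (a standard form; the paper's exact convention for the grading direction may differ):

```latex
P(z) = \bigl(P_c - P_m\bigr)\left(\frac{z}{h} + \frac{1}{2}\right)^{\!n} + P_m,
\qquad -\frac{h}{2} \le z \le \frac{h}{2},
```

where \(P\) is an effective property (e.g., Young's modulus or density), \(P_c\) and \(P_m\) are the two constituent properties, \(h\) is the plate thickness, and \(n\) is the power-law index controlling how smoothly the composition grades through the thickness.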
Abstract: In the present work, the problem of reconstructing the
ITER fusion plasma neutron source parameters using only the
Vertical Neutron Camera data was solved. The feasibility of neutron
source parameter reconstruction was estimated by numerical
simulations, and the adequacy of the mathematical model was
analysed. The neutron source was specified in parametric
form. A numerical analysis of the stability of the solution with respect
to data distortion was carried out. The influence of the data errors on the
reconstructed parameters is shown:
• is reconstructed with errors of less than 4% at all examined values
of δ (up to 60%);
• is determined with errors of less than 10% when δ does not exceed
5%;
• is reconstructed with a relative error of more than 10%;
• the integral intensity of the neutron source is determined with a 10%
error while the δ error is less than 15%;
where δ is the error of the signal measurements and (R0, Z0) is the
plasma center position; the remaining quantity is the parameter of the
neutron source profile.
Abstract: In this paper, a fast motion compensation algorithm is
proposed that improves coding efficiency for video sequences with
brightness variations. We also propose a cross entropy measure
between histograms of two frames to detect brightness variations. The
framewise brightness variation parameters, a multiplier and an offset
field for image intensity, are estimated and compensated. Simulation
results show that the proposed method yields a higher peak signal to
noise ratio (PSNR) compared with the conventional method, with a
greatly reduced computational load, when the video scene contains
illumination changes.
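The cross-entropy detector for brightness changes can be sketched as follows; the frames, bin count, and brightness offset are hypothetical:

```python
import math

def histogram(frame, bins=8, max_val=256):
    """Normalized intensity histogram of a frame (list of pixel values)."""
    h = [0] * bins
    for p in frame:
        h[p * bins // max_val] += 1
    total = float(len(frame))
    return [c / total for c in h]

def cross_entropy(p, q, eps=1e-12):
    """H(p, q) = -sum p log q; grows when q diverges from p."""
    return -sum(pi * math.log(qi + eps) for pi, qi in zip(p, q))

# Hypothetical frames: the second is the first with a brightness offset.
frame_a = [10, 20, 30, 40, 120, 130, 200, 210] * 32
frame_b = [min(255, v + 60) for v in frame_a]

h_a, h_b = histogram(frame_a), histogram(frame_b)
same = cross_entropy(h_a, h_a)
shifted = cross_entropy(h_a, h_b)
print(shifted > same)
```

A brightness change shifts the whole histogram, so the cross entropy between consecutive frames jumps, which is the cue the algorithm uses to trigger the multiplier/offset compensation.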
Abstract: Synchronization is a difficult problem in CDMA
satellite communications. Because of additive noise and
fading in the mobile channel, it is not easy to track the
attenuation and offset. This paper considers a recently proposed
approach to the problem of synchronizing the chaotic Chen
system in CDMA satellite communication in the presence of constant
attenuation and offset. An analytic algorithm that provides closed-form
channel and carrier offset estimates is presented. The principle
of this approach is to add a compensation block before the
receiver that compensates for the distortion of the imperfect channel
using a genetic algorithm.
The results presented show that the receiver is able to recover
synchronization with the transmitter rapidly.
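For reference, the chaotic Chen system with its standard parameters (a = 35, b = 3, c = 28) can be integrated as below; this sketches only the transmitter dynamics, not the paper's genetic-algorithm compensation block:

```python
# Minimal RK4 integration of the chaotic Chen system.
def chen(state, a=35.0, b=3.0, c=28.0):
    x, y, z = state
    return (a * (y - x),
            (c - a) * x - x * z + c * y,
            x * y - b * z)

def rk4_step(f, s, dt):
    k1 = f(s)
    k2 = f(tuple(si + dt / 2 * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + dt / 2 * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
                 for si, a1, a2, a3, a4 in zip(s, k1, k2, k3, k4))

state = (-0.1, 0.5, -0.6)
for _ in range(5000):
    state = rk4_step(chen, state, 0.001)
print([round(v, 3) for v in state])
```

A receiver copy of these equations, driven by the received signal, is what must be kept synchronized despite channel attenuation and offset.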
Abstract: In general, the images used for compression are of
different types, such as dark images and high-intensity images. When these
images are compressed using a Counter Propagation Neural Network,
convergence takes a long time. The reason is that a given
image may contain a number of distinct gray levels with only narrow
differences from their neighborhood pixels. If the gray levels of the
pixels in an image and of their neighbors are mapped so that the
difference between the gray level of each pixel and its neighbors is
minimized, then both the compression ratio and the convergence of the
network can be improved. To achieve this, a Cumulative Distribution
Function is estimated for the image and used to map the image
pixels. When the mapped image pixels are used, the Counter
Propagation Neural Network yields a high compression ratio and
converges quickly.
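The CDF-based gray-level mapping is essentially histogram equalization; a minimal sketch on a hypothetical dark image:

```python
def cdf_map(pixels, levels=256):
    """Map gray levels through the image's cumulative distribution
    function (classic histogram equalization)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    n = float(len(pixels))
    return [round((levels - 1) * cdf[p] / n) for p in pixels]

# Hypothetical dark image: gray levels crowded into a narrow band.
dark = [40, 41, 41, 42, 43, 43, 44, 45]
print(cdf_map(dark))  # → [32, 96, 96, 128, 191, 191, 223, 255]
```

The mapping spreads the crowded gray levels over the full range, which is the property the abstract argues helps the network converge.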
Abstract: This article presents a short discussion of
optimum neighborhood size selection in a spherical self-organizing
feature map (SOFM). The majority of the literature
on SOFMs has addressed the issue of selecting optimal
learning parameters for Cartesian-topology SOFMs.
However, experience with the spherical SOFM suggests that the
learning heuristics of the Cartesian-topology SOFM do not translate
directly. This article presents an approach for estimating
the neighborhood size of a spherical SOFM from
the data. It adopts the L-curve criterion, previously suggested
for choosing the regularization parameter in linear
systems whose right-hand side is contaminated
with noise. Simulation results are presented on two artificial
4D data sets of the coupled Hénon-Ikeda map.
Abstract: With a great many pipelines all over the
country, Iran comprises various ecosystems
with varying degrees of fragility and robustness, as well as diverse
geographical conditions. This study presents a state-of-the-art method
for estimating the environmental risks of pipelines by proposing
rational equations, including FES, URAS, SRS, RRS, DRS, LURS,
and IRS, as well as FRS, to calculate the risks. The study was carried
out with a relative semi-quantitative approach based on land uses and
HVAs (High-Value Areas). GIS was used as a tool to create
maps of the environmental risks, land uses, and distances. The
main logic behind the formulas was distance-based approaches
and the ESI, as well as their intersections. Summarizing the results of the
study, a geographical risk map based on the ESIs and the final risk score
(FRS) was created. The results showed that the most sensitive,
and therefore highest-risk, area is an area of mangrove
forests located in the neighborhood of the pipeline. Salty lands were
the most robust land-use units under pipeline-failure
circumstances. The study also showed that mapping
pipeline risks with the applied method offers
more reliability and convenience, as well as relative
comprehensiveness, in comparison with present non-holistic methods for
assessing the environmental risks of pipelines. The focus of the
present study is "assessment" rather than "management". It is
suggested that new policies be implemented to reduce the
negative effects of the pipeline, which has not yet been completely
constructed.
Abstract: Thirty-three re-wetting tests were conducted on barley at
different combinations of temperature (5.7-46.3 °C) and relative
humidity (48.2-88.6%). The two most commonly used thin-layer
drying and re-wetting models, the Page and Diffusion models, were
compared for their ability to fit the experimental re-wetting data,
based on the standard error of estimate (SEE) between the measured and
simulated moisture contents. The comparison shows that both the Page
and Diffusion models fit the re-wetting experimental data of barley
well. The average SEE values for the Page and Diffusion models
were 0.176% d.b. and 0.199% d.b., respectively. The Page and
Diffusion models were found to be the most suitable equations for
describing the thin-layer re-wetting characteristics of barley over a
typical five-day re-wetting period. These two models can be used for the
simulation of deep-bed re-wetting of barley occurring during
ventilated storage and deep-bed drying.
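The Page model fit and the SEE criterion can be sketched as follows; the data points and the fitted constants k and n are assumed for illustration and are not the study's values:

```python
import math

def page_model(t, k, n):
    """Page thin-layer model: moisture ratio MR = exp(-k * t**n)."""
    return math.exp(-k * t ** n)

def see(observed, predicted, n_params=2):
    """Standard error of estimate between measured and simulated values."""
    sse = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    return math.sqrt(sse / (len(observed) - n_params))

# Hypothetical thin-layer moisture-ratio data (illustrative only).
t_hours = [0, 2, 4, 8, 16, 32]
mr_obs = [1.00, 0.78, 0.64, 0.47, 0.30, 0.16]

k, n = 0.12, 0.85  # assumed fitted constants, not the study's values
mr_sim = [page_model(t, k, n) for t in t_hours]
print(round(see(mr_obs, mr_sim), 4))
```

In the study the constants would be fitted to each temperature/humidity combination, and the model with the lower average SEE (here playing the role of the 0.176% vs 0.199% d.b. comparison) is preferred.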
Abstract: A self-evolution algorithm for optimizing neural networks using a combination of PSO and JPSO is proposed. The algorithm optimizes both the network topology and the parameters simultaneously, with the aim of achieving the desired accuracy with less complicated networks. The performance of the proposed approach is compared with that of conventional back-propagation networks on several synthetic functions, with better results in the case of the former. The proposed algorithm is also applied to the slope stability problem to estimate the critical factor of safety. Based on the results obtained, the proposed self-evolving network produced a better estimate of the critical safety factor than the conventional BPN.
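A plain global-best PSO over the weights of a tiny fixed-topology network gives the flavor of the approach (the paper's hybrid PSO/JPSO and simultaneous topology search are not reproduced; all settings below are assumptions):

```python
import math
import random

random.seed(1)

def net(weights, x):
    """Tiny 1-3-1 network; weights packed as [w1(3), b1(3), w2(3), b2]."""
    h = [math.tanh(weights[i] * x + weights[3 + i]) for i in range(3)]
    return sum(weights[6 + i] * h[i] for i in range(3)) + weights[9]

xs = [i / 10.0 for i in range(-10, 11)]
ys = [x * x for x in xs]  # synthetic target function

def mse(weights):
    return sum((net(weights, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Plain global-best PSO over the 10 weights.
dim, swarm_size = 10, 30
pos = [[random.uniform(-2, 2) for _ in range(dim)] for _ in range(swarm_size)]
vel = [[0.0] * dim for _ in range(swarm_size)]
pbest = [p[:] for p in pos]
pbest_f = [mse(p) for p in pos]
g = pbest[min(range(swarm_size), key=lambda i: pbest_f[i])][:]

for _ in range(200):
    for i in range(swarm_size):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.72 * vel[i][d]
                         + 1.49 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.49 * r2 * (g[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        f = mse(pos[i])
        if f < pbest_f[i]:
            pbest_f[i], pbest[i] = f, pos[i][:]
            if f < mse(g):
                g = pos[i][:]

print(round(mse(g), 4))
```

Because PSO needs only fitness evaluations, it can in principle also mutate the topology encoding alongside the weights, which is the extra step the self-evolution algorithm adds.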
Abstract: Video watermarking is usually considered as watermarking of a set of still images. In the frame-by-frame watermarking approach, each video frame is seen as a single watermarked image, so the collusion attack is more critical in video watermarking. If the same or a redundant watermark is used for embedding in every frame of a video, the watermark can be estimated and then removed by the watermark estimate remodulation (WER) attack. Likewise, if uncorrelated watermarks are used for every frame, these watermarks can be washed out by frame temporal filtering (FTF). The switching watermark system, or so-called SS-N system, has better performance against WER and FTF attacks: in this system, for each frame, the watermark is randomly picked from a finite pool of watermark patterns. The SS-N system is first surveyed, and then a new collusion attack on the SS-N system is proposed, using a new algorithm for separating video frames based on the watermark pattern. In this way, N sets are built, each containing the frames that carry the same watermark. After that, by applying the WER attack within each set, the N different watermark patterns are estimated and later removed.
Abstract: The need to solve complicated multi-dimensional
scientific problems, together with the need to optimize several
objective functions, is the main motivation behind artificial
intelligence and heuristic methods.
In this paper, we introduce a new method for multiobjective
optimization based on learning automata. In the proposed method, the
search space is divided into separate hyper-cubes, and each cube is
considered an action. After combining all objective functions
with separate weights, the cumulative function is taken as the
fitness function. By applying all the cubes to the cumulative
function, we calculate the amount of amplification of each action, and
the algorithm continues in this way to find the best solutions. In this
method, a lateral memory is used to gather the significant points of
each iteration of the algorithm. Finally, by considering the
domination factor, the Pareto front is estimated. The results of several
experiments show the effectiveness of this method in comparison
with a genetic-algorithm-based method.
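The hyper-cube/action scheme can be sketched with a linear reward-inaction style update on a toy bi-objective problem; the objectives, grid size, reward threshold, and step size below are all assumptions, and the lateral memory and Pareto-front extraction are omitted:

```python
import random

random.seed(7)

# Two toy objectives to minimize (not the paper's test problems).
def f1(x, y): return x * x + y * y
def f2(x, y): return (x - 1) ** 2 + (y - 1) ** 2

def cumulative(x, y, w1=0.5, w2=0.5):
    """Weighted sum of the objectives used as the single fitness."""
    return w1 * f1(x, y) + w2 * f2(x, y)

# Partition the unit square into hyper-cubes; each cube is one action.
k = 4
cubes = [(i, j) for i in range(k) for j in range(k)]
prob = {c: 1.0 / len(cubes) for c in cubes}

def sample(cube):
    i, j = cube
    return ((i + random.random()) / k, (j + random.random()) / k)

beta = 0.1  # amplification (reinforcement) step
for _ in range(300):
    # choose an action according to the current probabilities
    r, acc, chosen = random.random(), 0.0, cubes[-1]
    for c in cubes:
        acc += prob[c]
        if r <= acc:
            chosen = c
            break
    x, y = sample(chosen)
    if cumulative(x, y) < 0.6:  # reward good cubes (threshold illustrative)
        for c in cubes:
            prob[c] = (prob[c] + beta * (1.0 - prob[c]) if c == chosen
                       else prob[c] * (1.0 - beta))

best = max(cubes, key=lambda c: prob[c])
print(best, round(prob[best], 3))
```

Cubes whose samples repeatedly score well accumulate probability, so the automaton concentrates its search there; in the full method this is done per weighted combination, with a lateral memory and a domination check producing the estimated Pareto front.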