Abstract: The wide applicability of concurrent programming
practices in software development leads to various concurrency
errors, among which data races are the most important. Java
provides strong support for concurrent programming through its
concurrency packages. Aspect-oriented programming (AOP) is a
modern programming paradigm that facilitates the runtime
interception of events of interest and can be used effectively to
handle concurrency problems. AspectJ, an aspect-oriented extension
to Java, makes the concepts of AOP applicable to data race
detection. Volatile variables are usually considered thread-safe,
but they can become candidates for data races if non-atomic
operations are performed on them concurrently. Various data race
detection algorithms have been proposed in the past, but this issue
of volatility and atomicity remains unaddressed. The aim of this
research is to propose conditions for detecting data races on
volatile fields in Java programs, taking into account the support
for atomicity in the Java concurrency packages and making use of
pointcuts. Two simple test programs demonstrate the results of the
research, which are verified on two different Java Development Kits
(JDKs) for comparison.
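A minimal sketch of the class of bug the abstract targets, written in Python rather than the paper's Java/AspectJ setup: a shared counter updated with a non-atomic read-modify-write. Visibility guarantees (like Java's volatile) do not make such a compound operation atomic, so two threads can interleave between the read and the write and lose updates.

```python
# Illustrative sketch (Python, not the paper's Java/AspectJ code):
# a shared counter updated with a non-atomic read-modify-write,
# the kind of operation that makes even a "volatile" field racy.
import threading

counter = 0  # shared field; visibility alone does not make this update atomic

def worker(n):
    global counter
    for _ in range(n):
        tmp = counter      # read
        counter = tmp + 1  # write: another thread may interleave here

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Lost updates are possible: counter may end up below 200_000.
print(counter)
```

In Java, the analogous fix is an atomic read-modify-write (e.g. from `java.util.concurrent.atomic`); the research described above aims to detect the unsafe pattern at volatile fields via pointcuts.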
Abstract: We present a new quadrature rule based on spline
interpolation, along with its error analysis. Moreover, error
estimates for the remainder are given when the integrand is a
Lipschitzian function, a function of bounded variation, or a
function whose derivative belongs to Lp. We also give examples
showing that, in practice, the spline rule is better than the
trapezoidal rule.
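The paper's exact rule is not reproduced here, but the following hedged sketch shows the general idea of a spline-based quadrature outperforming the trapezoidal rule on the same nodes: a natural cubic spline is fitted to equally spaced data and its piecewise integral (trapezoid plus a second-derivative correction) is summed, in pure Python.

```python
# Hedged sketch: natural cubic spline quadrature vs. the trapezoidal
# rule on the same uniformly spaced nodes (not the paper's exact rule).
import math

def spline_quadrature(y, h):
    """Integrate data y_0..y_n (spacing h) with a natural cubic spline.

    Piecewise integral: h*(y_i + y_{i+1})/2 - h**3*(M_i + M_{i+1})/24,
    where M_i are the spline's second derivatives at the nodes.
    """
    n = len(y) - 1
    # Tridiagonal system for interior second derivatives (natural BCs):
    # M_{i-1} + 4*M_i + M_{i+1} = 6*(y_{i-1} - 2*y_i + y_{i+1}) / h**2
    a = [1.0] * (n - 1)          # sub-diagonal
    b = [4.0] * (n - 1)          # main diagonal
    c = [1.0] * (n - 1)          # super-diagonal
    d = [6.0 * (y[i - 1] - 2 * y[i] + y[i + 1]) / h**2 for i in range(1, n)]
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n - 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    m = [0.0] * (n - 1)
    m[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        m[i] = (d[i] - c[i] * m[i + 1]) / b[i]
    M = [0.0] + m + [0.0]        # natural boundary conditions M_0 = M_n = 0
    return sum(h * (y[i] + y[i + 1]) / 2 - h**3 * (M[i] + M[i + 1]) / 24
               for i in range(n))

def trapezoid(y, h):
    return sum(h * (y[i] + y[i + 1]) / 2 for i in range(len(y) - 1))

n, h = 8, math.pi / 8
y = [math.sin(i * h) for i in range(n + 1)]   # exact integral on [0, pi] is 2
print(trapezoid(y, h), spline_quadrature(y, h))
```

On this example the spline rule's second-derivative correction cancels the leading Euler-Maclaurin error term of the trapezoidal rule, so its result is markedly closer to the exact value 2.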
Abstract: Data rate, tolerable bit error rate or frame error rate,
and range and coverage are the key performance requirements of a
communication link. In this paper, the performance of an MFSK link
is analyzed in terms of bit error rate, number of errors, and total
amount of data processed. An improvement in BER is observed in the
proposed communication link model, which is implemented using a
MATLAB block set. The parameters that affect the BER and help keep
it low in an M-ary communication system are also identified.
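As a hedged stand-in for the MATLAB block-set model, the sketch below estimates the BER of coherent binary FSK (M = 2, orthogonal signaling) by Monte Carlo simulation; the theoretical value is Q(sqrt(Eb/N0)), and the measured BER falls as Eb/N0 grows.

```python
# Hedged sketch: Monte Carlo BER for coherent binary FSK (M = 2),
# not the paper's MATLAB block-set link model.
import math
import random

def bfsk_ber(eb_no_db, n_bits=50_000, rng=random.Random(1)):
    eb = 1.0
    n0 = eb / 10 ** (eb_no_db / 10)
    sigma = math.sqrt(n0 / 2)            # noise std dev per branch
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        # Matched-filter outputs of the two orthogonal tone branches.
        r0 = (math.sqrt(eb) if bit == 0 else 0.0) + rng.gauss(0, sigma)
        r1 = (math.sqrt(eb) if bit == 1 else 0.0) + rng.gauss(0, sigma)
        if (r1 > r0) != (bit == 1):      # decide the larger branch
            errors += 1
    return errors / n_bits

print(bfsk_ber(0), bfsk_ber(8))   # BER drops as Eb/N0 grows
```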
Abstract: Arbitrarily shaped video objects are an important
concept in modern video coding methods. The techniques presently
used are based not on image elements but on video objects having
an arbitrary shape. In this paper, spatial shape error concealment
techniques for object-based video in error-prone environments are
proposed. We consider a geometric shape representation consisting
of the object boundary, which can be extracted from the α-plane.
Three different approaches are used to replace a missing boundary
segment: Bézier interpolation, Bézier approximation, and NURBS
approximation. Experimental results on object shapes of varying
concealment difficulty demonstrate the performance of the proposed
methods. Comparisons between the proposed methods are also
presented.
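A hedged sketch of the building block behind the Bézier-based concealment: evaluating a cubic Bézier curve with De Casteljau's algorithm to bridge a missing boundary segment. The control points below are hypothetical, not taken from the paper.

```python
# Hedged sketch: cubic Bezier evaluation via De Casteljau's algorithm,
# the primitive behind Bezier interpolation of a missing boundary
# segment (control points here are hypothetical).
def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at t in [0, 1] by repeated interpolation."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# Reconnect a lost boundary segment between two intact boundary points
# p0 and p3; p1 and p2 shape the curve (e.g. from boundary tangents).
p0, p1, p2, p3 = (0, 0), (1, 2), (3, 2), (4, 0)
curve = [de_casteljau([p0, p1, p2, p3], i / 10) for i in range(11)]
print(curve[0], curve[-1])   # endpoints of the gap are matched exactly
```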
Abstract: The revival of religion, including Islam, in Kazakhstan is above all a reaction to internal social and political changes and to the events that followed the disintegration of the USSR. The revival of Kazakhstani Islam has been accompanied by both positive and negative tendencies. Old mosques were restored and new ones built, Islamic schools and colleges were created, religious doctrine was widely studied, the corresponding literature was published, contacts with fellow Muslims abroad expanded, and centers of Arab-Muslim culture spread. At the same time, there are religious-political parties and movements in Kazakhstan pursuing radical goals, up to and including changing the spiritual and cultural identity of the Muslims of Kazakhstan through the forcible introduction of non-traditional religious-political and ethno-cultural values.
Abstract: Stem cells have the ability to differentiate through
mitotic cell division into a wide range of specialized cell types.
Cellular differentiation is the process by which a less specialized
cell develops into a more specialized one. This paper studies the
fundamental problem of a computational schema for an artificial
neural network based on chemical, physical, and biological state
variables. Through such a study, the system could be modeled for
the viable propagation of the differentiation of various
economically important stem cells. This paper proposes various
differentiation outcomes of an artificial neural network into a
variety of potential specialized cells, implemented in MATLAB
version 2009. A feed-forward back-propagation network was created
with an input vector of five elements, a single hidden layer, and
one output unit in the output layer. The efficiency of the neural
network was evaluated by comparing the results achieved in this
study with the experimental input data and the chosen target data.
The proposed solution's efficiency was assessed by comparative
analysis of the mean squared error at zero epochs. Different data
variables are used to test the targeted results.
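The network topology described above can be sketched as follows: a forward pass of a feed-forward network with five inputs, one hidden layer, and one output unit, plus the mean squared error used for assessment. The weights are illustrative, not the paper's trained MATLAB model.

```python
# Hedged sketch: a feed-forward network with the abstract's topology
# (5 inputs, one hidden layer, one output), pure Python; the weights
# are illustrative, not the paper's trained MATLAB model.
import math

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass: tanh hidden layer, linear output unit."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

# Three hidden units with hypothetical weights.
w_hidden = [[0.1, -0.2, 0.3, 0.0, 0.1]] * 3
b_hidden = [0.0, 0.1, -0.1]
w_out, b_out = [0.5, 0.5, 0.5], 0.0

inputs = [[1, 0, 1, 0, 1], [0, 1, 0, 1, 0]]   # five state variables each
targets = [0.6, 0.2]                           # chosen target data
preds = [forward(x, w_hidden, b_hidden, w_out, b_out) for x in inputs]
print(mse(preds, targets))
```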
Abstract: This study investigates a vehicle Lumped Parameter
Model (LPM) in a frontal crash. There are several ways of
determining the spring and damper characteristics, and this type of
problem should be treated as system identification. This study uses
a Genetic Algorithm (GA), an effective procedure for optimization
problems, to minimize the error between the target data
(experimental data) and the calculated results (obtained by
analytical solution). The model is analyzed with 5 DOF, and the
results are compared with a 5-DOF serial model. Finally, the
response of the model to external excitation is investigated.
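The identification loop described above can be sketched with a minimal real-coded GA. As a hedged stand-in for the 5-DOF crash model, a toy 1-DOF damped oscillation plays the role of the analytical response, and the GA searches for the stiffness and damping that reproduce hypothetical target data; all settings are illustrative.

```python
# Hedged sketch: a minimal real-coded GA identifying spring stiffness k
# and damping c of a toy 1-DOF model from target data, standing in for
# the paper's 5-DOF identification (all settings are illustrative).
import math
import random

rng = random.Random(42)
T = [i * 0.05 for i in range(40)]

def response(k, c, t):
    # Toy damped oscillation standing in for the analytical crash response.
    return math.exp(-c * t) * math.cos(k * t)

K_TRUE, C_TRUE = 3.0, 0.8
target = [response(K_TRUE, C_TRUE, t) for t in T]   # "experimental" data

def fitness(ind):
    k, c = ind
    return sum((response(k, c, t) - y) ** 2 for t, y in zip(T, target))

pop = [(rng.uniform(0.5, 6.0), rng.uniform(0.1, 2.0)) for _ in range(30)]
init_best = min(fitness(p) for p in pop)
for _ in range(60):
    pop.sort(key=fitness)
    elite = pop[:6]                      # elitism: keep the best unchanged
    children = []
    while len(children) < len(pop) - len(elite):
        (k1, c1), (k2, c2) = rng.sample(elite, 2)
        a = rng.random()                 # blend crossover + Gaussian mutation
        children.append((a * k1 + (1 - a) * k2 + rng.gauss(0, 0.05),
                         a * c1 + (1 - a) * c2 + rng.gauss(0, 0.05)))
    pop = elite + children

best = min(pop, key=fitness)
print(best, fitness(best))
```

Elitism makes the best fitness monotonically non-increasing over generations, which is why the error between the target and calculated responses can only improve.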
Abstract: A lot of work has been done on predicting the fault proneness of software systems. However, the severity of the faults is more important than the number of faults existing in the developed system, since the major faults matter most to a developer and need immediate attention. In this paper, we try to predict the level of impact of the existing faults in software systems. A neuro-fuzzy based predictor model is applied to NASA's public domain defect dataset, coded in the C programming language. Correlation-based Feature Selection (CFS) evaluates the worth of a subset of attributes by considering the individual predictive ability of each feature along with the degree of redundancy between them; CFS is therefore used to select the metrics that correlate most highly with the level of severity of faults. The results are compared with the prediction results of Logistic Model Trees (LMT), earlier quoted as the best technique in [17], and are recorded in terms of accuracy, Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE). The results show that the neuro-fuzzy based model provides relatively better prediction accuracy than the other models and hence can be used for modeling the level of impact of faults in function-based systems.
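The three reported evaluation metrics are standard; a hedged sketch of their computation on hypothetical severity predictions:

```python
# Hedged sketch: the three reported metrics (accuracy, MAE, RMSE)
# computed in pure Python on hypothetical severity-level predictions.
import math

def accuracy(preds, actual):
    return sum(p == a for p, a in zip(preds, actual)) / len(actual)

def mae(preds, actual):
    return sum(abs(p - a) for p, a in zip(preds, actual)) / len(actual)

def rmse(preds, actual):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(preds, actual))
                     / len(actual))

actual = [1, 2, 3, 2]        # hypothetical fault severity levels
preds  = [1, 2, 2, 3]
print(accuracy(preds, actual), mae(preds, actual), rmse(preds, actual))
```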
Abstract: The performance of the time-reversal MUSIC algorithm degrades dramatically in the presence of strong noise and multiple scattering (i.e., when scatterers are close to each other), because of errors in determining the number of scatterers. This paper provides a new approach to alleviate this problem using an information-theoretic criterion referred to as the minimum description length (MDL). The merits of the new approach are confirmed by numerical examples. The results indicate that time-reversal MUSIC yields accurate estimates of the target locations under considerable noise and multiple scattering in the received signals.
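A hedged sketch of the MDL criterion in its classical Wax-Kailath form, applied to the eigenvalues of a sample covariance matrix to estimate the number of sources (here, scatterers); the eigenvalues and snapshot count are hypothetical, and the paper's exact formulation may differ.

```python
# Hedged sketch: the classical Wax-Kailath MDL source-number estimator
# applied to sorted covariance eigenvalues; the eigenvalues and the
# snapshot count below are hypothetical.
import math

def mdl_order(eigs, n_snapshots):
    """Pick k minimizing MDL(k) over the descending-sorted eigenvalues."""
    p = len(eigs)
    scores = []
    for k in range(p):
        tail = eigs[k:]
        geo = math.exp(sum(math.log(e) for e in tail) / len(tail))
        arith = sum(tail) / len(tail)
        # Log-likelihood term: geometric/arithmetic mean ratio of the
        # assumed-noise eigenvalues, plus the MDL complexity penalty.
        loglik = -n_snapshots * len(tail) * math.log(geo / arith)
        penalty = 0.5 * k * (2 * p - k) * math.log(n_snapshots)
        scores.append(loglik + penalty)
    return min(range(p), key=scores.__getitem__)

# Two strong scatterers above a flat noise floor.
eigs = [10.0, 8.0, 1.0, 1.0, 1.0]
print(mdl_order(eigs, n_snapshots=100))
```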
Abstract: In this paper, Gaussian-type quadrature rules for fuzzy functions are discussed. The error representation and convergence theorems are given. Moreover, four kinds of Gaussian-type quadrature rules with error terms for approximating fuzzy integrals are presented. The present paper complements the theoretical results of the paper by T. Allahviranloo and M. Otadi [T. Allahviranloo, M. Otadi, Gaussian quadratures for approximate of fuzzy integrals, Applied Mathematics and Computation 170 (2005) 874-885]. The obtained results are illustrated by solving some numerical examples.
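A hedged sketch of the crisp (non-fuzzy) scalar rule underlying Gaussian-type quadrature: the two-point Gauss-Legendre rule on [-1, 1], which is exact for polynomials up to degree 3.

```python
# Hedged sketch: the crisp two-point Gauss-Legendre rule on [-1, 1],
# the scalar building block behind Gaussian-type fuzzy quadratures.
import math

def gauss2(f):
    """Two-point Gauss-Legendre: exact for polynomials up to degree 3."""
    node = 1 / math.sqrt(3)
    return f(-node) + f(node)     # both weights equal 1

# Exact on the cubic below: integral of x**3 + x**2 + 1 over [-1, 1] = 8/3.
print(gauss2(lambda x: x**3 + x**2 + 1))
```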
Abstract: This paper presents two new efficient algorithms for
contour approximation. The proposed algorithms are compared with
the Ramer (good quality), Triangle (faster), and Trapezoid
(fastest) methods, which are briefly described. The Cartesian
coordinates of an input contour are processed in such a manner that
the contour is finally represented by a set of selected vertices of
its edge. The paper presents the main idea of the analyzed
procedures for contour compression. For comparison, the mean square
error and signal-to-noise ratio criteria are used, and the
computational time of the analyzed methods is estimated from the
number of numerical operations. Experimental results are obtained
in terms of image quality, compression ratio, and speed. The main
advantage of the analyzed algorithms is the small number of
arithmetic operations compared to existing algorithms.
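The Ramer method named as the quality baseline is the Ramer-Douglas-Peucker polyline reduction; a hedged sketch with an illustrative contour and tolerance:

```python
# Hedged sketch: Ramer (Ramer-Douglas-Peucker) polyline reduction, the
# quality baseline named above; the contour and tolerance are
# illustrative.
import math

def point_line_dist(p, a, b):
    """Perpendicular distance from p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    if norm == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / norm

def ramer(points, eps):
    """Keep the farthest vertex if it exceeds eps; recurse on both halves."""
    if len(points) < 3:
        return list(points)
    dists = [point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= eps:
        return [points[0], points[-1]]
    return ramer(points[:i + 1], eps)[:-1] + ramer(points[i:], eps)

contour = [(0, 0), (1, 0.001), (2, 0), (3, 0.001), (4, 0)]
print(ramer(contour, eps=0.01))   # near-collinear vertices are dropped
```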
Abstract: In this paper, approaches for increasing the
effectiveness of error detection in computer network channels with
Pulse-Amplitude Modulation (PAM) are proposed. The proposed
approaches are based on the special features of the errors that
appear in lines with PAM. The first approach consists of a CRC
modification specifically for lines with PAM. The second approach
is based on the use of weighted checksums, and a way of coding the
checksum components has been developed. It is shown that the
proposed checksum modifications ensure superior reliability of
digital data transmission control for channels with PAM compared to
CRC.
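A hedged sketch of the weighted-checksum idea (not the paper's exact component coding): weighting each symbol by its position makes the check sensitive to error patterns, such as symbol transpositions, that a plain modular sum cannot see.

```python
# Hedged sketch: a position-weighted checksum (not the paper's exact
# coding) catching a symbol transposition that a plain modular sum
# misses -- an error pattern relevant to multilevel (PAM) lines.
def plain_sum(symbols, mod=251):
    return sum(symbols) % mod

def weighted_sum(symbols, mod=251):
    # Weight each symbol by its position, so reordering changes the value.
    return sum(i * s for i, s in enumerate(symbols, start=1)) % mod

sent = [3, 1, 2, 0, 3]            # 2-bit PAM symbol levels
received = [3, 2, 1, 0, 3]        # two adjacent symbols swapped on the line
print(plain_sum(sent) == plain_sum(received),        # True: swap undetected
      weighted_sum(sent) == weighted_sum(received))  # False: swap detected
```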
Abstract: This study examines the causal link between energy use and economic growth for five South Asian countries over the period 1971-2006. Panel cointegration, an error correction model (ECM), and FMOLS are applied for short- and long-run estimates. In the short run, unidirectional causality from per capita GDP to per capita energy consumption is found, but not vice versa. In the long run, a one percent increase in per capita energy consumption tends to decrease per capita GDP by 0.13 percent, i.e., energy use discourages economic growth. This short- and long-run relationship indicates an energy shortage crisis in South Asia, due to increased energy use coupled with an insufficient energy supply. Besides this, the estimated long-run coefficient of the error term suggests that short-term adjustments to equilibrium are driven by adjustment back to the long-run equilibrium. Moreover, per capita energy consumption is responsive to this adjustment back to equilibrium, which takes approximately 59 years; this indicates a long-run feedback between the two variables.
Abstract: The submitted paper deals with the problems of trapping
and enriching gases and aerosols of the substances to be determined
in the ambient atmosphere. Further, the paper focuses on the
working principle of the miniaturized portable continuous
concentrator we have designed, and on the possibilities of its
application in air sampling and in the accumulation of the organic
and inorganic substances with which the air is contaminated. Stress
is laid on trapping vapours and aerosols of solid substances with
comparatively low vapour pressure, such as explosive compounds.
Abstract: We study the performance of a compressed
beamforming-weight feedback technique in a generalized triangular
decomposition (GTD) based MIMO system. GTD is a beamforming
technique that enjoys QoS flexibility. The technique, however,
performs optimally only when full knowledge of the channel state
information (CSI) is available at the transmitter, which is
impossible in a real system, where there are channel estimation
errors and limited feedback. We suggest a way to implement
quantized beamforming-weight feedback, which can significantly
reduce the feedback data, in a GTD-based MIMO system, and we
investigate the system's performance. Interestingly, we find that
compressed beamforming-weight feedback does not degrade the BER
performance of the system at low input power, whereas channel
estimation error and quantization do. In comparison, GTD is more
sensitive to compression and quantization, while SVD is more
sensitive to channel estimation error. We also explore the
performance of a GTD-based MU-MIMO system and find that its BER
performance starts to degrade noticeably at around -20 dB channel
estimation error.
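A hedged sketch of the basic operation behind limited (quantized) feedback: uniformly quantizing a complex beamforming weight to a fixed number of bits per real dimension. The bit budget and weight value are illustrative, not the paper's feedback scheme.

```python
# Hedged sketch: uniform quantization of a complex beamforming weight
# to b bits per real dimension, the basic operation behind limited
# feedback (bit budget and weight value are illustrative).
def quantize(value, bits, lo=-1.0, hi=1.0):
    """Map value in [lo, hi] to the nearest of 2**bits uniform levels."""
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    idx = round((min(max(value, lo), hi) - lo) / step)
    return lo + idx * step

def quantize_weight(w, bits):
    return complex(quantize(w.real, bits), quantize(w.imag, bits))

w = 0.3172 - 0.648j                      # one entry of a precoding vector
wq = quantize_weight(w, bits=4)
print(wq, abs(w - wq))                   # error bounded by the step size
```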
Abstract: In previous multi-solid models, the φ approach is used
for the calculation of fugacity in the liquid phase. For the first
time, in the proposed multi-solid thermodynamic model, the γ
approach has been used for the calculation of fugacity in the
liquid mixture. Accordingly, several activity coefficient models
have been studied, and the results show that the predictive Wilson
model is more appropriate than the others. The results demonstrate
that the γ approach using the predictive Wilson model agrees better
with experimental data than the previous multi-solid models. This
method also generates a new approach for performing stability
analysis in phase equilibrium calculations. Meanwhile, the run time
of the γ approach is less than that of the previous methods, which
used the φ approach. The new model shows an average absolute
deviation (AAD) of 0.75% from the experimental data, which is
clearly less than the error of the previous multi-solid models.
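The AAD% figure of merit quoted above can be sketched as follows, on hypothetical data points (the measured quantities are illustrative, not the paper's dataset):

```python
# Hedged sketch: the AAD% (average absolute deviation) figure of merit
# quoted in the abstract, computed on hypothetical data points.
def aad_percent(calculated, experimental):
    """AAD% = (100/n) * sum |calc_i - exp_i| / |exp_i|."""
    n = len(experimental)
    return 100 / n * sum(abs(c - e) / abs(e)
                         for c, e in zip(calculated, experimental))

exp_data  = [0.52, 0.61, 0.70]     # hypothetical measured values
calc_data = [0.515, 0.615, 0.705]  # hypothetical model predictions
print(aad_percent(calc_data, exp_data))
```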
Abstract: A three-dimensional thermal model of laser
full-penetration welding (Nd:YAG, transparent mode) of DP600 alloy
steel, 1.25 mm thick with a gap of 0.1 mm, is analyzed. Three
models study the influence on the transient temperature of
temperature-dependent thermal properties, temperature-independent
thermal properties, and the peak value of the specific heat at the
phase transformation temperature AC1. Another seven models study
the influence of the discretization, i.e., the mesh, on the
temperature distribution in the welded plate. It is shown that, for
the effects of the thermal properties, errors of less than 4% of
the maximum temperature in the FZ and HAZ were identified. The
minimum discretization is at least one third of an increment per
radius for the temporal discretization, while the spatial
discretization requires two elements per radius and four elements
through the thickness of the assembled plate; these represent the
minimum modeling requirements for laser welding in order to keep
the errors below 5% compared to a fine mesh.
Abstract: In this paper, a new encoding algorithm for the spectral envelope, based on NLMS, is proposed for G.729.1 in VoIP. In the TDAC part of G.729.1, the spectral envelope and the MDCT coefficients extracted from the weighted CELP coding error (lower band) and the higher-band input signal are encoded. In order to reduce the bits allocated to spectral envelope coding, a new quantization algorithm based on NLMS is proposed, and the saved bits are used to enhance sound quality. The performance of the proposed algorithm is evaluated by sound quality and bit reduction rates under clean and frame-loss conditions.
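A hedged sketch of the generic NLMS (normalized least mean squares) update that the proposed coder builds on, here used to identify a hypothetical 2-tap system; this is the textbook algorithm, not G.729.1's envelope quantizer.

```python
# Hedged sketch: the core NLMS (normalized least mean squares) update,
# identifying a hypothetical 2-tap system; this is the generic
# algorithm, not G.729.1's spectral envelope coder.
import random

rng = random.Random(7)
h_true = [0.5, -0.3]                 # unknown system to identify
w = [0.0, 0.0]                       # adaptive filter taps
mu, eps = 0.5, 1e-8                  # step size, regularizer

x_hist = [0.0, 0.0]
for _ in range(2000):
    x_hist = [rng.gauss(0, 1)] + x_hist[:1]        # newest sample first
    d = sum(h * x for h, x in zip(h_true, x_hist)) # desired (noiseless)
    y = sum(wi * x for wi, x in zip(w, x_hist))
    e = d - y
    norm = sum(x * x for x in x_hist) + eps        # input-power normalization
    w = [wi + mu * e * x / norm for wi, x in zip(w, x_hist)]

print(w)   # close to h_true after adaptation
```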
Abstract: This paper analyzes the patterns of Monte Carlo data
for a large number of variables and minterms in order to
characterize circuit path length behavior. We propose models that
are determined by a training process on the shortest path lengths
derived from a wide range of binary decision diagram (BDD)
simulations. The model was created using a feed-forward neural
network (NN) modeling methodology. Experimental results for the
ISCAS benchmark circuits show an RMS error of 0.102 for the
shortest-path-length complexity estimation predicted by the NN
model (NNM). Such a model can help reduce the time complexity of
very large scale integration (VLSI) circuit design and of related
computer-aided design (CAD) tools that use BDDs.
Abstract: The captured gel electrophoresis image represents the
output of a DNA computing algorithm. Before this image is captured,
DNA computing involves parallel overlap assembly (POA) and the
polymerase chain reaction (PCR), which are the core of this
computing algorithm. However, the design of the DNA
oligonucleotides to represent a problem is quite complicated and
prone to errors. In order to reduce these errors during the design
stage, before the actual in-vitro experiment is carried out,
simulation software capable of simulating the POA and PCR processes
was developed. The capability of this simulation software is
unlimited, in that a problem of any size and complexity can be
simulated, thus saving the cost of possible errors during the
design process. Information about the DNA sequences during the
computing process, as well as the computing output, can be
extracted at the same time using the simulation software.
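A hedged sketch of one primitive such a POA/PCR simulator needs: the Watson-Crick reverse complement, used to check whether two oligonucleotides can anneal (the sequences below are hypothetical).

```python
# Hedged sketch: the Watson-Crick reverse complement check a POA/PCR
# simulator needs when testing whether two oligonucleotides can anneal
# (sequences here are hypothetical).
COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """5'->3' reverse complement of a DNA sequence."""
    return "".join(COMP[base] for base in reversed(seq))

def can_anneal(a, b):
    """True if b is exactly the reverse complement of a."""
    return b == reverse_complement(a)

probe = "ATGC"
print(reverse_complement(probe), can_anneal(probe, "GCAT"))
```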