Abstract: The coverage probability and range of IEEE 802.16
systems depend on the wireless scenario. The IEEE 802.16
specifications suggest evaluating system performance over Stanford
University Interim (SUI) channels. To derive an effective method
for forecasting coverage probability and range, this study uses the
SUI channel model to analyze the coverage probability of an IEEE
802.16 system with Rayleigh fading. Simulation results give the BER
of the IEEE 802.16 system, from which the maximum allowed path loss
is calculated and substituted into the coverage analysis. The
simulation results thus show the coverage range both with and
without Rayleigh fading.
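The coverage step sketched above, converting a maximum allowed path loss into a cell radius, can be illustrated by inverting a log-distance path loss model; the reference loss, path loss exponent and 10 dB fading margin below are illustrative assumptions, not parameters from this study.

```python
def coverage_radius(max_path_loss_db, pl_d0_db=91.0, exponent=3.5, d0_m=100.0):
    """Invert the log-distance model PL(d) = PL(d0) + 10*n*log10(d/d0)
    to obtain the cell radius (metres) for a maximum allowed path loss.
    All default parameter values are assumed for illustration."""
    return d0_m * 10 ** ((max_path_loss_db - pl_d0_db) / (10 * exponent))

# A Rayleigh-fading margin consumes part of the link budget, shrinking the range.
r_no_fading = coverage_radius(130.0)           # no fading margin
r_with_fading = coverage_radius(130.0 - 10.0)  # assumed 10 dB fading margin
```

With these assumed parameters, the 10 dB fading margin shrinks the radius by the factor 10^(10/35), roughly 1.9x.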
Abstract: The wavelet transform is a very powerful tool for image compression. One of its advantages is that it localizes image energy in both space and frequency. However, wavelet transform coefficients are defined by both a magnitude and a sign. While algorithms exist for efficiently coding the magnitude of the transform coefficients, they are not efficient for coding the sign. It is generally assumed that no compression gain can be obtained from coding the sign, and only recently have some authors begun to investigate the sign of wavelet coefficients in image coding. Some authors have assumed that the sign bit of a wavelet coefficient may be encoded with an estimated probability of 0.5; the same assumption is made for the refinement bit. In this paper, we propose a new method for Separate Sign Coding (SSC) of wavelet image coefficients. The sign and the magnitude of the coefficients are examined to obtain their online probabilities. We use scalar quantization, in which the information of whether a coefficient belongs to the lower or upper sub-interval of the uncertainty interval is also examined. We show that the sign information and the refinement information may be encoded with a probability of approximately 0.5 only after about five bit planes. Two maps are entropy encoded separately: the sign map and the magnitude map. The refinement information indicating whether a coefficient belongs to the lower or upper sub-interval of the uncertainty interval is also entropy encoded. An algorithm is developed and simulations are performed on three standard greyscale images: Lena, Barbara and Cameraman. Five scales are computed using the biorthogonal 9/7 wavelet filter bank. The results are compared to the JPEG2000 standard in terms of peak signal-to-noise ratio (PSNR) and subjective (visual) quality for the three images. It is shown that the proposed method outperforms JPEG2000. The proposed method is also compared to other codecs in the literature and proves very successful in terms of PSNR.
Abstract: Supported nickel and gold nanocluster catalysts were
analyzed by XAS, XRD and XPS in order to determine their local,
global and electronic structure. The present study points out a
strong deformation of the local structure of the metal due to its
interaction with the oxide supports. The average particle size, the
mean squares of the microstrain, and the particle size distribution
and microstrain functions of the supported Ni and Au catalysts were
determined by XRD, using the Generalized Fermi Function to
approximate the X-ray line profiles. Based on EXAFS analysis, we
consider that the local structure of the investigated systems is
strongly distorted with respect to the numbers of atomic pairs.
Metal-support interaction is confirmed by the shape changes of the
probability densities of electron transitions: Ni K edge (1s →
continuum and 2p) and Au LIII edge (2p3/2 → continuum, 6s, 6d5/2
and 6d3/2). XPS investigations confirm the metal-support
interaction at their interface.
Abstract: With the rapid popularization of internet services, it is apparent that next-generation terrestrial communication systems must be capable of supporting various applications such as voice, video, and data. This paper presents a performance evaluation of turbo-coded mobile terrestrial communication systems, which are capable of providing high-quality services for delay-sensitive (voice or video) and delay-tolerant (text transmission) multimedia applications in urban and suburban areas. Different types of multimedia information require different service qualities, which are generally expressed in terms of a maximum acceptable bit-error-rate (BER) and maximum tolerable latency. The breakthrough discovery of turbo codes allows us to significantly reduce the probability of bit errors with feasible latency. In a turbo-coded system, a trade-off between latency and BER results from the choice of convolutional component codes, interleaver type and size, decoding algorithm, and the number of decoding iterations. This trade-off can be exploited for multimedia applications by using optimal and suboptimal combinations of performance parameters to achieve different service qualities. The results therefore suggest an adaptive framework for turbo-coded wireless multimedia communications that incorporates a set of performance parameters achieving an appropriate set of service qualities, depending on the application's requirements.
Abstract: A new fuzzy filter is presented for noise reduction of
images corrupted with additive noise. The filter consists of two
stages. In the first stage, all pixels of the image are processed
to determine the noisy pixels: a fuzzy rule-based system associates
a degree with each pixel, a real number in the range [0,1] denoting
the probability that the pixel is not noisy. In the second stage,
another fuzzy rule-based system is employed; it uses the output of
the first fuzzy system to perform fuzzy smoothing by weighting the
contributions of neighboring pixel values. Experimental results
show the feasibility of the proposed filter and are compared to
other filters by numerical measures and visual inspection.
Abstract: SDMA (Space-Division Multiple Access) is a MIMO
(Multiple-Input and Multiple-Output) based wireless communication
network architecture with the potential to significantly increase
spectral efficiency and system performance. Maximum likelihood (ML)
detection provides the optimal performance, but its complexity
grows exponentially with the modulation constellation size and the
number of users. QR decomposition (QRD) MUD can substitute for ML
detection due to its low complexity and near-optimal performance.
The minimum mean-squared-error (MMSE) multiuser detection (MUD)
minimises the mean square error (MSE), which does not guarantee
that the BER of the system is also minimised. The minimum bit error
rate (MBER) MUD, however, performs better than the classic MMSE MUD
in terms of minimum probability of error by directly minimising the
BER cost function. The MBER MUD is also able to support more users
than the number of receiving antennas, a scenario in which the
other MUDs fail. In this paper the performance of various MUD
techniques is verified for correlated MIMO channel models based on
the IEEE 802.16n standard.
Abstract: In this paper we consider a one-dimensional random
geometric graph process with the inter-nodal gaps evolving according
to an exponential AR(1) process. The transition probability matrix
and stationary distribution are derived for the Markov chains concerning
connectivity and the number of components. We analyze an algorithm
for the hitting time to disconnectivity. In addition to dynamical
properties, we also study topological properties of static
snapshots. We obtain the degree distributions as well as
asymptotically precise bounds and strong laws of large numbers for
the connectivity threshold distance and the largest
nearest-neighbor distance, among others. Both exact results and
limit theorems are provided in this paper.
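As an illustration of the process described above, the following sketch simulates inter-nodal gaps from a stationary exponential AR(1) recursion (the Gaver-Lewis form is assumed here) and estimates the connectivity probability of a static snapshot by Monte Carlo; all parameter values are illustrative.

```python
import random

def exp_ar1_gaps(n, rho, lam=1.0, rng=random):
    """Generate n inter-nodal gaps from a stationary exponential AR(1) process:
    X_k = rho*X_{k-1} + I_k*E_k, with I_k ~ Bernoulli(1-rho) and E_k ~ Exp(lam),
    so the stationary marginal distribution is Exp(lam) (Gaver-Lewis form assumed)."""
    x = rng.expovariate(lam)
    gaps = [x]
    for _ in range(n - 1):
        innovation = rng.expovariate(lam) if rng.random() > rho else 0.0
        x = rho * x + innovation
        gaps.append(x)
    return gaps

def connectivity_prob(n_nodes, rho, r, trials=2000, seed=1):
    """Monte Carlo estimate of P(every gap <= r), i.e. the 1-D geometric
    graph with connection radius r is connected."""
    rng = random.Random(seed)
    hits = sum(all(g <= r for g in exp_ar1_gaps(n_nodes - 1, rho, rng=rng))
               for _ in range(trials))
    return hits / trials
```

Increasing the threshold distance r raises the estimated connectivity probability, as expected.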
Abstract: Most real queuing systems include special properties and constraints that cannot be analyzed directly using the results of solved classical queuing models. The absence of Markov-chain features, non-exponential patterns, and service constraints are examples of such conditions. This paper presents an applied general algorithm for analyzing and optimizing queuing systems. The stages of the algorithm are described through a real case study, which consists of an almost completely non-Markovian system with a limited number of customers and limited capacities, as well as many of the exceptions common in real queuing networks. Simulation is used to optimize this system. The stages introduced in this article thus include primary modeling, determining the kind of queuing system, index definition, statistical analysis and goodness-of-fit testing, model validation, and simulation-based optimization of the system.
Abstract: We consider a network of two parallel M/M/1 queues fed by the same Poisson arrival stream with rate λ. Upon arrival to the system, a customer heads to the shortest queue and stays until served. If the two queues have the same length, an arriving customer chooses one of them with equal probability. Each service time in the two queues is an exponential random variable with rate μ, and no jockeying is permitted between the two queues. A new numerical method, based on linear programming and convex optimization, is developed for computing the steady-state solution of the system.
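The queueing model described above (join-the-shortest-queue with random tie-breaking and no jockeying) can be sketched with a small event-driven simulation; this is an illustrative Monte Carlo sketch, not the paper's linear-programming method, and the rates are assumed values.

```python
import random

def simulate_jsq(lam=0.8, mu=1.0, t_end=10000.0, seed=2):
    """Event-driven simulation of two parallel exponential servers fed by one
    Poisson(lam) arrival stream; each arrival joins the shorter queue, ties are
    broken uniformly at random, no jockeying. Returns the time-average total
    number of customers in the system."""
    rng = random.Random(seed)
    t = 0.0
    q = [0, 0]                                   # number in system at each queue
    next_arrival = rng.expovariate(lam)
    next_dep = [float("inf"), float("inf")]      # next departure per queue
    area = 0.0                                   # integral of total queue length
    while t < t_end:
        t_next = min(next_arrival, next_dep[0], next_dep[1])
        area += (q[0] + q[1]) * (t_next - t)
        t = t_next
        if t == next_arrival:
            # join the shortest queue; break ties uniformly at random
            i = 0 if q[0] < q[1] else 1 if q[1] < q[0] else rng.randrange(2)
            q[i] += 1
            if q[i] == 1:                        # server was idle: start service
                next_dep[i] = t + rng.expovariate(mu)
            next_arrival = t + rng.expovariate(lam)
        else:
            i = 0 if t == next_dep[0] else 1     # departure event
            q[i] -= 1
            next_dep[i] = t + rng.expovariate(mu) if q[i] > 0 else float("inf")
    return area / t
```

Heavier load (larger λ relative to 2μ) produces a larger time-average number in system, as expected.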
Abstract: The intention of this study is to design a probability-optimized sack-sewing workstation based on ergonomics, to improve productivity and decrease musculoskeletal disorders. The physical dimensions of two workers were used to design the new workstation. The physical dimensions are (1) sitting height, (2) mid-shoulder height sitting, (3) shoulder breadth, (4) knee height, (5) popliteal height, (6) hip breadth and (7) buttock-knee length. The 5th percentile of buttock-knee length sitting (51 cm), the 50th percentile of mid-shoulder height sitting (62 cm), and the 95th percentiles of popliteal height (43 cm) and hip breadth (45 cm) were applied to design the workstation for the sack-sewing operator, and the other dimensions were used to adjust the components of this workstation. The RULA risk-assessment scores before and after using the probability-optimized workstation were 7 and 7, and the REBA scores were 11 and 5, respectively. A body discomfort/abnormality index was used to assess muscle fatigue of the operators before workstation adjustment; fatigue was found in the neck muscles, the arm muscle area, the back muscles and the lower-back muscles. Therefore, extension and flexion exercises were applied to relieve musculoskeletal stresses. The workers exercised for 15 minutes before the beginning and at the end of work for 5 days. Afterwards, the flexion and extension capability of the workers' muscles had increased in 3 muscle groups (arm, leg, and back).
Abstract: In this paper, we consider a multi-user multiple-input
multiple-output (MU-MIMO) based cooperative reporting system for a
cognitive radio network. In the reporting network, the secondary
users forward the primary user data to a common fusion center
(FC). The FC is equipped with linear equalizers and an energy
detector to make the decision about the spectrum. The primary user
data are considered to be a digital video broadcasting - terrestrial
(DVB-T) signal. The sensing channel and the reporting channel are
assumed to be additive white Gaussian noise and independent
identically distributed Rayleigh fading, respectively. We analyze
the detection probability of the MU-MIMO system with linear
equalizers and arrive at a closed-form expression for the average
detection probability. The system performance is also investigated
under various MIMO scenarios through Monte Carlo simulations.
Abstract: In this paper, our focus is to ensure global frequency synchronization in OFDMA-based wireless mesh networks using only local information. To acquire global synchronization in a distributed manner, we propose a novel distributed frequency synchronization (DFS) method. In DFS, the carrier frequencies of distributed nodes converge to a common value through repeated estimation-and-averaging and sharing steps. Experimental results show that DFS achieves a notably better synchronization success probability than existing schemes in OFDMA-based mesh networks in the presence of estimation error.
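The estimation-and-averaging idea behind DFS can be illustrated with a toy consensus iteration; the ring topology, step size and round count below are assumptions for illustration, not the paper's algorithm.

```python
def dfs_average_consensus(freqs, neighbors, rounds=50, step=0.5):
    """Toy sketch of the averaging step assumed to underlie DFS: in each round,
    every node nudges its carrier frequency toward the mean of its neighbors'
    frequencies, so all offsets converge to a common value."""
    f = list(freqs)
    for _ in range(rounds):
        new = []
        for i, fi in enumerate(f):
            nbr_mean = sum(f[j] for j in neighbors[i]) / len(neighbors[i])
            new.append(fi + step * (nbr_mean - fi))
        f = new
    return f

# Ring of 5 nodes with assumed initial carrier-frequency offsets.
init = [0.0, 1.0, -0.5, 2.0, 0.3]
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
final = dfs_average_consensus(init, ring)
```

On this symmetric ring the update matrix is doubly stochastic, so the nodes converge to the average of the initial offsets using only local exchanges.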
Abstract: This paper argues that increased uncertainty, in certain
situations, may actually encourage investment. Since earlier studies
mostly base their arguments on the assumption of geometric Brownian
motion, the study extends the assumption to alternative stochastic
processes, such as mixed diffusion-jump, mean-reverting, and
jump-amplitude processes. A general Monte Carlo simulation approach
is developed to derive the optimal investment trigger for
situations in which a closed-form solution cannot be readily
obtained under the alternative process assumptions. The main
finding is that the overall effect of uncertainty on investment is
captured by the probability of investing, and the relationship
between uncertainty and investment appears to follow an inverted
U-shaped curve. The implication
is that uncertainty does not always discourage investment even under
several sources of uncertainty. Furthermore, high-risk projects are not
always dominated by low-risk projects because the high-risk projects
may have a positive realization effect on encouraging investment.
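A minimal version of the Monte Carlo machinery described above, restricted here to plain geometric Brownian motion rather than the jump or mean-reverting variants, estimates the probability of investing as the chance that the project value reaches an assumed investment trigger within a finite horizon; all parameter values are illustrative.

```python
import math
import random

def prob_hit_trigger(v0=1.0, trigger=1.5, mu=0.05, sigma=0.2,
                     t_horizon=5.0, n_steps=250, n_paths=4000, seed=3):
    """Monte Carlo estimate of the probability that a geometric Brownian
    motion project value reaches the investment trigger within the horizon.
    Uses the exact log-normal step V *= exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)."""
    rng = random.Random(seed)
    dt = t_horizon / n_steps
    drift = (mu - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    hits = 0
    for _ in range(n_paths):
        v = v0
        for _ in range(n_steps):
            v *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            if v >= trigger:          # investment is made once the trigger is hit
                hits += 1
                break
    return hits / n_paths
```

Lowering the trigger raises the estimated probability of investing, as expected; the same machinery can be re-run across volatilities to trace the uncertainty-investment relationship.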
Abstract: In this article, a formal specification and verification of the Rabin public-key scheme in a formal proof system is presented. The idea is to combine the two views of cryptographic verification: the computational approach, which relies on the vocabulary of probability theory and complexity theory, and the formal approach, based on ideas and techniques from logic and programming languages. A major objective of this article is the presentation of the first computer-proved implementation of the Rabin public-key scheme in Isabelle/HOL. Moreover, we explicate a computer-proven formalization of correctness as well as a computer verification of security properties using a straightforward computation model in Isabelle/HOL. The analysis uses a given database to prove formal properties of our implemented functions with computer support. The main task in designing a practical formalization of correctness, as well as efficient computer proofs of security properties, is to cope with the complexity of cryptographic proving. We reduce this complexity by exploring a lightweight formalization that enables both appropriate formal definitions and efficient formal proofs. Consequently, we obtain reliable proofs with a minimal error rate while augmenting the underlying database, which provides a formal basis for further computer proof constructions in this area.
Abstract: The present work considers a scheme for evaluating the
transition probability in a quantum system. It is based on the path
integral representation of the transition probability amplitude and
its evaluation by means of a saddle point method applied to part of
the integration variables. The whole integration process is reduced
to solving initial value problems for Hamilton's equations with a
random initial phase point. The scheme is related to semiclassical
initial value representation approaches, which use a great number
of trajectories. In contrast to them, only one path for each
initial coordinate value is selected from the total set of
generated phase paths in a Monte Carlo process.
Abstract: Guard channels improve the probability of successful
handoffs by reserving a number of channels exclusively for
handoffs. This concept risks underutilization of the radio
spectrum, because fewer channels are granted to originating calls
even when the guard channels are not in use and originating calls
are starving for channels. The penalty is a reduction in total
carried traffic. An optimum number of guard channels can help
reduce this problem. This paper presents a fuzzy-logic-based guard
channel scheme wherein guard channels are reorganized on the basis
of traffic density, so that guard channels are provided on a need
basis. This helps accommodate more originating calls and hence
achieves higher throughput of the radio spectrum.
Abstract: This paper presents an improved variable-ordering method to obtain the minimum number of nodes in Reduced Ordered Binary Decision Diagrams (ROBDDs). The proposed method uses graph topology to find the best variable ordering; to this end, the input Boolean function is converted to a unidirectional graph. Three levels of graph parameters are used to increase the probability of finding a good variable ordering. The initial level uses the total number of nodes (NN) in all the paths, the total number of paths (NP) and the maximum number of nodes among all paths (MNNAP). The second and third levels use two extra parameters: the shortest path between two variables (SP) and the sum of shortest paths from one variable to all the other variables (SSP). A permutation of the graph parameters is performed at each level for each variable order, and the number of nodes is recorded. Experimental results are promising: the proposed method is found to be more effective in finding a good variable ordering for the majority of benchmark circuits.
Abstract: The paper presents frame and burst acquisition in a satellite communication network based on time division multiple access (TDMA), in which the transmissions may be carried on different transponders. A unique word pattern is used for the acquisition process. The search for the frame is aided by soft decisions on QPSK-modulated signals in an additive white Gaussian noise channel. Results show that when the false alarm rate is low, the probability of detection is also low and the acquisition time is long. Conversely, when the false alarm rate is high, the probability of detection is also high and the acquisition time is short. Thus the system operators can trade high false alarm rates for high detection probabilities and shorter acquisition times.
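The trade-off stated above can be reproduced with a small Monte Carlo sketch of a soft-decision unique-word correlator in AWGN; the signal amplitude, noise level and unique-word length are assumed values, and a real-valued correlator stands in for full QPSK.

```python
import random

def uw_detection(threshold, amp=1.0, noise_sigma=0.7, uw_len=32,
                 trials=3000, seed=4):
    """Monte Carlo estimate of (P_detection, P_false_alarm) for a soft-decision
    unique-word correlator in AWGN. Under H1 the received samples carry the
    unique word at amplitude `amp`; under H0 they are noise only. All parameter
    values are illustrative assumptions."""
    rng = random.Random(seed)
    uw = [1 if rng.random() < 0.5 else -1 for _ in range(uw_len)]
    detections = false_alarms = 0
    for _ in range(trials):
        # H1: unique word present -> correlate soft samples against the pattern
        stat_h1 = sum(u * (amp * u + rng.gauss(0.0, noise_sigma)) for u in uw)
        detections += stat_h1 >= threshold
        # H0: noise only
        stat_h0 = sum(u * rng.gauss(0.0, noise_sigma) for u in uw)
        false_alarms += stat_h0 >= threshold
    return detections / trials, false_alarms / trials
```

Lowering the correlator threshold raises both the detection probability and the false alarm rate together, which is exactly the operator trade-off the abstract describes.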
Abstract: The objective of global optimization is to find the
globally best solution of a model. Nonlinear models are ubiquitous
in many applications and their solution often requires a global
search approach; i.e., for a function f from a set A ⊂ R^n to
the real numbers, an element x0 ∈ A is sought such that
∀ x ∈ A : f(x0) ≤ f(x). Depending on the field of application,
the question of whether a found solution x0 is not only a local
minimum but a global one is very important.
This article presents a probabilistic approach to determining the
probability that a solution is a global minimum. The approach is
independent of the global search method used and only requires a
bounded, convex parameter domain A as well as a Lipschitz
continuous function f whose Lipschitz constant need not be known.
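As a simple illustration of assessing confidence that a candidate is a global minimum (a plain Monte Carlo check on a box domain, not the paper's Lipschitz-based approach), one can sample the domain uniformly and count points that undercut the candidate value.

```python
import random

def global_min_confidence(f, candidate_val, bounds, n_samples=5000,
                          tol=1e-9, seed=5):
    """Illustrative Monte Carlo check (not the paper's method): sample the box
    `bounds` uniformly and return the fraction of samples that do NOT undercut
    the candidate value f(x0) by more than `tol`."""
    rng = random.Random(seed)
    beaten = 0
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        if f(x) < candidate_val - tol:
            beaten += 1
    return 1.0 - beaten / n_samples

# Example: f(x) = (x - 1)^2 on [-2, 2]; candidate x0 = 1 with f(x0) = 0.
conf = global_min_confidence(lambda x: (x[0] - 1.0) ** 2, 0.0, [(-2.0, 2.0)])
```

If any sample beats the candidate, the candidate is certainly not globally optimal; if none does, confidence grows with the sample count (the paper's approach refines this naive check using the Lipschitz property).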
Abstract: This paper shows that the application of probability-statistical methods is unfounded at the early stages of diagnosing the technical condition of an aviation gas turbine engine (GTE), when the flight information is fuzzy, limited and uncertain. Hence, the efficiency of applying the new Soft Computing technology at these diagnosing stages, using Fuzzy Logic and Neural Network methods, is considered. Fuzzy multiple linear and non-linear models (fuzzy regression equations), obtained on the basis of statistical fuzzy data, are trained with high accuracy. To build a more adequate model of the GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients are analysed. Research on the changes in the skewness and kurtosis coefficient values shows that the distributions of GTE operating parameters have a fuzzy character; hence consideration of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of changes in the basic characteristics of GTE operating parameters leads to the conclusion that Fuzzy Statistical Analysis is necessary for preliminary identification of the engines' technical condition. Research on the changes in correlation coefficient values also shows their fuzzy character; therefore, the application of Fuzzy Correlation Analysis results is proposed for model selection. To check model adequacy, the Fuzzy Multiple Correlation Coefficient of Fuzzy Multiple Regression is considered. When sufficient information is available, it is proposed to use a recurrent algorithm for identifying the aviation GTE technical condition (using Hard Computing technology) from measurements of the input and output parameters of the multiple linear and non-linear generalised models in the presence of measurement noise (a new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of the engine's technical condition. As an application of the given technique, the temperature condition of a new operating aviation engine was estimated.