Abstract: Despite the availability of natural disaster related time series data for the last 110 years, no forecasting tool is available to humanitarian relief organizations for emergency logistics planning. This study develops a forecasting tool based on identifying suitable probability distributions; the estimated parameters are used to compute natural disaster forecasts. Further, the determination of aggregate forecasts leads to efficient pre-disaster planning. Based on the research findings, relief agencies can optimize resource allocation in emergency logistics planning.
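A minimal sketch of the distribution-fitting idea described above; the abstract does not name the distribution used, so the gamma model and the annual-count data here are illustrative assumptions:

```python
# Illustrative sketch: fit a probability distribution to historical annual
# disaster counts and use its quantiles as planning forecasts.  The gamma
# distribution and the sample data are assumptions, not the authors' choices.
import numpy as np
from scipy import stats

annual_counts = np.array([12, 9, 15, 11, 14, 10, 13, 16, 8, 12])  # hypothetical

# Fit a gamma distribution by maximum likelihood (location fixed at 0).
shape, loc, scale = stats.gamma.fit(annual_counts, floc=0)

point_forecast = stats.gamma.mean(shape, loc=loc, scale=scale)
p90_forecast = stats.gamma.ppf(0.90, shape, loc=loc, scale=scale)
print(f"expected events/year: {point_forecast:.1f}, 90th percentile: {p90_forecast:.1f}")
```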
Abstract: A method is presented for obtaining the error probability of block codes. The method is based on the eigenvalue-eigenvector properties of the code correlation matrix. It is found that, under a unitary transformation and in an additive white Gaussian noise environment, the performance evaluation of a block code becomes a one-dimensional problem in which only one eigenvalue and its corresponding eigenvector are needed in the computation. The error rate results show remarkable agreement between simulation and analysis.
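A brief sketch of the eigen-decomposition step at the heart of the method; the small correlation matrix below is an assumed example, not a matrix from the paper:

```python
# Illustrative sketch: eigen-decomposition of a (hypothetical) code
# correlation matrix.  A unitary transformation preserves the AWGN
# statistics, so performance can be assessed along the dominant
# eigenvector alone, as the abstract describes.
import numpy as np

R = np.array([[4.0, 1.0, 0.5],
              [1.0, 4.0, 1.0],
              [0.5, 1.0, 4.0]])   # assumed correlation matrix of a tiny code

eigvals, eigvecs = np.linalg.eigh(R)   # R is symmetric, so eigh applies
dominant = np.argmax(eigvals)
print("dominant eigenvalue:", eigvals[dominant])
print("corresponding eigenvector:", eigvecs[:, dominant])
```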
Abstract: In this paper, the detection of a fault in Global Positioning System (GPS) measurements is addressed. The class of faults considered is a bias in the GPS pseudorange measurements, modeled as an unknown constant. The fault could be the result of a receiver fault or a signal fault such as multipath error. A bias bank is constructed based on a set of possible fault hypotheses. Initially, each bias in the bank is assigned an equal probability of occurrence. Subsequently, as the measurements are processed, the probability of occurrence of each bias is sequentially updated. The fault whose probability approaches unity is declared as the current fault in the GPS measurement. The residual formed from the GPS and Inertial Measurement Unit (IMU) measurements is used to update the probability of each fault. Results are presented to show the performance of the algorithm.
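A minimal sketch of the sequential probability update over a bias bank; the Gaussian residual model, noise level and bias values are illustrative assumptions:

```python
# Sequential Bayesian update over a bank of hypothesised pseudorange biases.
import numpy as np
from scipy.stats import norm

biases = np.array([0.0, 5.0, 10.0, 20.0])        # hypothesised biases (m)
probs = np.full(len(biases), 1.0 / len(biases))  # equal prior probabilities
sigma = 3.0                                      # assumed residual noise std (m)

rng = np.random.default_rng(0)
true_bias = 10.0
for _ in range(50):
    residual = true_bias + rng.normal(0.0, sigma)    # GPS/IMU residual sample
    likelihood = norm.pdf(residual, loc=biases, scale=sigma)
    probs = probs * likelihood
    probs /= probs.sum()                             # Bayes normalisation

print(dict(zip(biases, probs.round(3))))  # probability of the 10 m bias -> ~1
```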
Abstract: We report a computational study of the spreading
dynamics of a viral infection in a complex (scale-free) network. The
final epidemic size distribution (FESD) was found to be unimodal or
bimodal depending on the value of the basic reproductive
number R0. The FESDs occurred on time-scales long enough for
intermediate-time epidemic size distributions (IESDs) to be important
for control measures. The usefulness of R0 for deciding on the
timeliness and intensity of control measures was found to be limited
by the multimodal nature of the IESDs and by its inability to inform
on the speed at which the infection spreads through the population. A
reduction of the transmission probability at the hubs of the scale-free
network decreased the occurrence of the larger-sized epidemic events
of the multimodal distributions. For effective epidemic control, an
early reduction in transmission at the index cell and its neighbors was
essential.
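A minimal sketch of the kind of stochastic simulation behind such final-size distributions, assuming a Barabási-Albert scale-free network and a simple discrete-time SIR rule (all parameters here are illustrative, not those of the study):

```python
# Repeated stochastic SIR runs on a scale-free network yield an empirical
# final epidemic size distribution (FESD).
import random
import networkx as nx

def final_size(G, beta=0.1, seed_node=0):
    """One stochastic SIR run; each infected node recovers after one step."""
    infected, recovered = {seed_node}, set()
    while infected:
        new = set()
        for u in infected:
            for v in G.neighbors(u):
                if v not in infected and v not in recovered and random.random() < beta:
                    new.add(v)
        recovered |= infected
        infected = new - recovered
    return len(recovered)

G = nx.barabasi_albert_graph(1000, 3, seed=1)   # scale-free network
sizes = [final_size(G) for _ in range(200)]     # empirical FESD samples
print(min(sizes), max(sizes))
```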
Abstract: In this paper we propose an intelligent agent approach
to control the electric power grid at a smaller granularity in order to
give it self-healing capabilities. We develop a method using the
influence model to transform transmission substations into
information processing, analyzing and decision making (intelligent
behavior) units. We also develop a wireless communication method
to deliver real-time uncorrupted information to an intelligent
controller in a power system environment. A combined networking
and information theoretic approach is adopted in meeting both the
delay and error probability requirements. We use a mobile agent
approach in optimizing the achievable information rate vector and in
the distribution of rates to users (sensors). We develop the concept
and the quantitative tools required to create cooperating semi-autonomous
subsystems, putting the electric grid on the path
towards an intelligent and self-healing system.
Abstract: To support mobility in ATM networks, a number of
technical challenges need to be resolved. The impact of handoff
schemes in terms of service disruption, handoff latency, cost
implications and excess resources required during handoffs needs to
be addressed. In this paper, a two-phase handoff and route
optimization solution is studied: in the first phase, reserved PVCs
between adjacent ATM switches reroute connections during
inter-switch handoff; in the second phase, a distributed optimization
process is initiated to optimally reroute handoff connections. The main
objective is to find the optimal operating point at which to perform
optimization subject to a cost constraint, with the purpose of reducing
the blocking probability of inter-switch handoff calls for delay-tolerant
traffic. We examine the relation between the required bandwidth
resources and the optimization rate. We also calculate and study the
handoff blocking probability due to a lack of bandwidth in the resources
reserved to facilitate rapid rerouting.
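The abstract does not state its blocking-probability model; as an illustrative assumption, the standard Erlang-B formula shows how blocking of handoff calls competing for a fixed pool of reserved bandwidth can be evaluated:

```python
# Erlang-B blocking probability via the standard recursion.  Applying it to
# handoff calls over reserved PVC bandwidth is an assumption for
# illustration, not necessarily the paper's exact model.
def erlang_b(offered_load, channels):
    b = 1.0
    for k in range(1, channels + 1):
        b = (offered_load * b) / (k + offered_load * b)
    return b

# e.g. handoff traffic of 8 Erlangs offered to 12 reserved bandwidth units
print(f"blocking probability: {erlang_b(8.0, 12):.4f}")
```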
Abstract: Relay based communication has gained considerable importance in recent years. In this paper we find the end-to-end statistics of a two-hop non-regenerative relay branch, each hop being Nakagami-m faded. Closed-form expressions for the probability density functions of the signal envelope at the output of a selection combiner and a maximal ratio combiner at the destination node are also derived, and the analytical formulations are verified through computer simulation. These density functions are useful in evaluating system performance in terms of bit error rate and outage probability.
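A Monte Carlo sketch of the kind of simulation used to verify such envelope statistics; the fixed-gain product model for the cascaded envelope, and all parameter values, are assumptions made for illustration:

```python
# Simulate two Nakagami-m hops per branch, cascade them, and compare
# selection combining (SC) with maximal ratio combining (MRC).
import numpy as np

rng = np.random.default_rng(0)
m, omega, n = 2.0, 1.0, 100_000          # Nakagami shape, mean power, samples

def nakagami_envelope(size):
    # A Nakagami-m envelope is the square root of a Gamma(m, omega/m) power
    return np.sqrt(rng.gamma(m, omega / m, size))

hop1 = nakagami_envelope((n, 2))          # two independent relay branches
hop2 = nakagami_envelope((n, 2))
branch = hop1 * hop2                      # assumed cascaded (product) model

sc = branch.max(axis=1)                   # selection combining at destination
mrc = np.sqrt((branch ** 2).sum(axis=1))  # MRC equivalent output envelope
print(f"mean SC envelope: {sc.mean():.3f}, mean MRC envelope: {mrc.mean():.3f}")
```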
Abstract: Many Wireless Sensor Network (WSN) applications necessitate secure multicast services for broadcasting delay-sensitive data such as video files and live telecasts at fixed time slots. This work provides a novel method to deal with the end-to-end delay and drop rate of packets. Opportunistic Routing chooses the link with the maximum packet delivery probability. Null Key Generation helps the receiver authenticate packets. A Markov Decision Process based Adaptive Scheduling algorithm determines the time slot for packet transmission. Both theoretical analysis and simulation results show that the proposed protocol ensures better performance in terms of packet delivery ratio, average end-to-end delay and normalized routing overhead.
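A toy sketch of the opportunistic next-hop rule named above: pick the neighbour link with the highest delivery probability. The neighbour table is hypothetical:

```python
# Hypothetical per-neighbour packet delivery probabilities.
delivery_prob = {"nodeA": 0.72, "nodeB": 0.91, "nodeC": 0.65}

# Opportunistic choice: forward via the most reliable link.
next_hop = max(delivery_prob, key=delivery_prob.get)
print("forward packet via", next_hop)   # -> nodeB
```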
Abstract: The paper deals with the calculation of the parameters of a
ceramic material from a set of destruction tests of ceramic heads of
total hip joint endoprostheses. The standard way of calculating the
material parameters consists in carrying out a set of 3- or 4-point
bending tests of specimens cut out from parts of the ceramic material
to be analysed. In case of ceramic heads, it is not possible to cut out
specimens of required dimensions because the heads are too small (if
the cut out specimens were smaller than the normalised ones, the
material parameters derived from them would exhibit higher strength
values than those which the given ceramic material really has). For
that reason, a special testing jig was made, in which 40 heads were
destroyed. From the measured circumferential strains of the
head's external spherical surface at destruction, the state of stress
in the head at destruction was established using the finite element
method (FEM). From the values obtained, the sought parameters
of the ceramic material were calculated using Weibull's weakest-link
theory.
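A sketch of the usual linearised Weibull fit from a set of failure stresses; the stress values below are hypothetical, and the paper's actual procedure additionally couples FEM stress fields with the weakest-link theory:

```python
# Estimate the Weibull modulus and characteristic strength from destruction
# stresses via the standard linearisation ln(-ln(1-F)) = m*ln(s) - m*ln(s0).
import numpy as np

stresses = np.sort(np.array([312., 355., 389., 402., 431., 460., 478., 515.]))  # MPa, assumed
n = len(stresses)
F = (np.arange(1, n + 1) - 0.5) / n         # median-rank failure probabilities

x = np.log(stresses)
y = np.log(-np.log(1.0 - F))                # Weibull linearisation
m_mod, c = np.polyfit(x, y, 1)              # slope = Weibull modulus m
sigma0 = np.exp(-c / m_mod)                 # characteristic strength

print(f"Weibull modulus m = {m_mod:.2f}, characteristic strength = {sigma0:.0f} MPa")
```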
Abstract: Protection system hidden failures have been identified as one of the main causes of system cascading collapse resulting in power system instability. In this paper, a systematic approach is presented to identify the probability of a system cascading collapse by taking into consideration the effect of protection system hidden failures. This includes the accurate calculation of the hidden failure probability, as it has a significant impact on the findings for the probability of system cascading collapse. The probability of a system cascading collapse is then used to identify the initial tripping of sensitive transmission lines that will contribute to a critical system cascading collapse. Based on the results obtained from this study, it is important to determine an accurate value of the hidden failure probability, as it will affect the probability of a system cascading collapse.
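An illustrative sketch (not the paper's exact formulation) of why the hidden failure probability dominates the result: if an initial trip exposes k relays, each hiding a failure with probability p_hf, the chance that at least one falsely trips and extends the cascade grows quickly with k:

```python
# Probability that at least one of k exposed relays with hidden failures
# extends the cascade: 1 - (1 - p_hf)^k.  Values are assumed.
p_hf = 0.01          # assumed hidden-failure probability per exposed relay
for k in (2, 5, 10):
    p_cascade = 1 - (1 - p_hf) ** k
    print(f"{k} exposed relays -> cascade-extension probability {p_cascade:.3f}")
```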
Abstract: In this paper, a novel scheme is proposed for Ownership Identification and Color Image Authentication by deploying Cryptography and Digital Watermarking. The color image is first transformed from RGB to the YST color space, exclusively designed for watermarking. After the color space transformation, each channel is divided into 4×4 non-overlapping blocks, with selection of the central 2×2 sub-blocks. Depending upon the channel selected, two to three LSBs of each central 2×2 sub-block are set to zero to hold the ownership, authentication and recovery information. The size and position of the sub-block are important for correct localization, enhanced security and fast computation. As YS ⊥ T, it is suitable to embed the recovery information apart from the ownership and authentication information; therefore the 4×4 block of the T channel, along with the ownership information, is processed by SHA-160 to compute a content-based hash that is unique and invulnerable to birthday attacks or hash collisions, instead of using MD5, which may raise the collision condition H(m) = H(m'). For recovery, the intensity mean of the 4×4 block of each channel is computed and encoded into eight bits. For watermark embedding, key-based mapping of blocks is performed using a 2D Torus Automorphism. Our scheme is oblivious, generates highly imperceptible images with correct localization of tampering within reasonable time, and has the ability to recover the original work with probability near one.
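A sketch of the key-based block mapping via a 2D torus automorphism: block coordinates on an N×N block grid are permuted by an area-preserving integer matrix mod N. The grid size and key value are illustrative:

```python
# 2D torus automorphism block permutation.  The matrix A = [[1, 1], [k, k+1]]
# has determinant 1, so the map is a bijection mod N; k acts as the key.
N, k = 8, 3                       # assumed 8x8 grid of blocks, key k

def torus_map(x, y):
    return (x + y) % N, (k * x + (k + 1) * y) % N

mapping = {(x, y): torus_map(x, y) for x in range(N) for y in range(N)}
assert len(set(mapping.values())) == N * N   # bijective: every block has a slot
print(mapping[(2, 5)])
```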
Abstract: Information sharing and gathering are important in this era of rapid technological advancement. The existence of the WWW has caused a rapid explosion of information. Readers are overloaded with too many lengthy text documents and are more interested in shorter versions. The oil and gas industry could not escape this predicament. In this paper, we develop an Automated Text Summarization System, AutoTextSumm, to extract the salient points of oil and gas drilling articles by incorporating a statistical approach, keyword identification, synonym words and sentence position. In this study, we conducted interviews with Petroleum Engineering experts and English Language experts to identify the most commonly used keywords in the oil and gas drilling domain. The performance of AutoTextSumm is evaluated using the precision, recall and F-score formulae. Based on the experimental results, AutoTextSumm has produced satisfactory performance with an F-score of 0.81.
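The evaluation formulae named above, sketched on hypothetical sentence sets (system-extracted summary vs. human reference):

```python
# Precision, recall and F-score over extracted sentence ids; ids are assumed.
system_sents = {1, 3, 5, 8}        # sentence ids picked by the summarizer
reference_sents = {1, 3, 4, 5}     # ids in the expert reference summary

tp = len(system_sents & reference_sents)
precision = tp / len(system_sents)
recall = tp / len(reference_sents)
f_score = 2 * precision * recall / (precision + recall)
print(f"P={precision:.2f} R={recall:.2f} F={f_score:.2f}")
```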
Abstract: In this paper, residue number arithmetic is used in a
direct sequence spread spectrum system. The system is evaluated, and
its bit error probability is compared to that of a non-residue
number system. The effects of channel bandwidth, PN
sequences, multipath and modulation scheme are studied. A
Matlab program is developed to measure the signal-to-noise ratio
(SNR), and the bit error probability for the various schemes.
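The paper's measurements use a Matlab program; this Python sketch shows the same kind of Monte Carlo bit-error-probability measurement for a plain BPSK link over AWGN, with the residue-number-system encoding itself omitted:

```python
# Monte Carlo BER measurement for BPSK over AWGN; parameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
n_bits, snr_db = 1_000_000, 6.0

bits = rng.integers(0, 2, n_bits)
symbols = 2 * bits - 1                          # BPSK mapping {0,1} -> {-1,+1}
noise_std = np.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))
received = symbols + rng.normal(0.0, noise_std, n_bits)

ber = np.mean((received > 0).astype(int) != bits)
print(f"measured BER at {snr_db} dB: {ber:.2e}")
```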
Abstract: The Interaction Model plays an important role in Model-based
Intelligent Interface Agent Architecture for developing
Intelligent User Interfaces. In this paper we present some
improvements in the algorithms for developing the interaction model of
an interface agent, including the action segmentation algorithm, the
action pair selection algorithm, the final action pair selection
algorithm, the interaction graph construction algorithm and the
probability calculation algorithm. An analysis of the algorithms is also
presented. At the end of this paper, we introduce an experimental
program called "Personal Transfer System".
Abstract: In this document, we propose a robust
conceptual strategy to improve robustness against manufacturing defects and thus the reliability of logic CMOS circuits. To enable the use of future CMOS
technology nodes, this strategy combines various design approaches:
DFR (Design for Reliability), fault tolerance techniques such as the hardware
redundancy of TMR (Triple Modular Redundancy) for hard error
tolerance, and DFT (Design for Testability). The results on the largest ISCAS and ITC benchmark circuits show that our approach considerably improves
reliability while reducing the key cost factors: area cost and fault probability.
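A minimal sketch of the TMR idea named above: triplicate a logic function and vote, so any single hard error is masked. The XOR function and the injected fault are illustrative:

```python
# Triple Modular Redundancy: three replicas feed a majority voter.
def majority(a, b, c):
    return (a & b) | (a & c) | (b & c)

def xor_gate(x, y):
    return x ^ y

# Fault injected into replica 2's output (stuck-at-1).
x, y = 1, 1
r1, r2, r3 = xor_gate(x, y), 1, xor_gate(x, y)   # r2 is the faulty copy
print("voted output:", majority(r1, r2, r3))     # -> 0, the fault is masked
```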
Abstract: S-boxes (Substitution boxes) are keystones of modern
symmetric cryptosystems (block ciphers, as well as stream ciphers).
S-boxes bring nonlinearity to cryptosystems and strengthen their
cryptographic security. They are used for confusion in data security.
An S-box satisfies the strict avalanche criterion (SAC), if and only if
for any single input bit of the S-box, the inversion of it changes each
output bit with probability one half. If a function (cryptographic
transformation) is complete, then each output bit depends on all of
the input bits. Thus, if it were possible to find the simplest Boolean
expression for each output bit in terms of the input bits, each of these
expressions would have to contain all of the input bits if the function
is complete. Among the important properties of an S-box, the most
interesting, the SAC (Strict Avalanche Criterion), is presented,
and three methods are proposed to analyze this property.
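A sketch of a direct SAC test following the definition above: flip each input bit of a toy 3-bit S-box and check that every output bit flips with probability close to one half. The S-box table is an arbitrary example permutation:

```python
# Direct SAC check on a toy 3-bit S-box (the table is an assumed example).
SBOX = [0b011, 0b101, 0b110, 0b000, 0b111, 0b001, 0b100, 0b010]
N_BITS = 3

for in_bit in range(N_BITS):
    for out_bit in range(N_BITS):
        # Count how often flipping input bit `in_bit` flips output bit `out_bit`.
        flips = sum(
            ((SBOX[x] ^ SBOX[x ^ (1 << in_bit)]) >> out_bit) & 1
            for x in range(2 ** N_BITS)
        )
        prob = flips / 2 ** N_BITS
        print(f"input bit {in_bit} -> output bit {out_bit}: flip prob {prob:.2f}")
```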
Abstract: Recent articles have addressed the problem of constructing confidence intervals for the mean of a normal distribution where the parameter space is restricted; see, for example, Wang [Confidence intervals for the mean of a normal distribution with restricted parameter space. Journal of Statistical Computation and Simulation, Vol. 78, No. 9, 2008, 829-841]. In this paper, we derive analytic expressions for the coverage probability and the expected length of the confidence interval for the normal mean when the whole parameter space is bounded. We also construct, for the first time, the confidence interval for the normal variance with a restricted parameter, and its coverage probability and expected length are also mathematically derived. As a result, one can use these criteria to assess the confidence intervals for the normal mean and variance when the parameter space is restricted, without backup from simulation experiments.
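For context, this is the kind of simulation check that the analytic coverage expressions render unnecessary: estimate the coverage probability of a confidence interval for a normal mean truncated to a bounded parameter space. All values are illustrative:

```python
# Monte Carlo coverage estimate for a CI truncated to the bounded space [a, b].
import numpy as np

rng = np.random.default_rng(0)
a, b, mu, sigma, n, z = -1.0, 1.0, 0.4, 1.0, 25, 1.96

hits, trials = 0, 20_000
for _ in range(trials):
    xbar = rng.normal(mu, sigma / np.sqrt(n))
    lo = max(xbar - z * sigma / np.sqrt(n), a)   # truncate the CI to [a, b]
    hi = min(xbar + z * sigma / np.sqrt(n), b)
    hits += lo <= mu <= hi
print(f"estimated coverage: {hits / trials:.3f}")
```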
Abstract: Ground-level tropospheric ozone is one of the air
pollutants of most concern. It is mainly produced by photochemical
processes involving nitrogen oxides and volatile organic compounds
in the lower parts of the atmosphere. Ozone levels become
particularly high in regions close to high ozone precursor emissions
and during summer, when stagnant meteorological conditions with
high insolation and high temperatures are common.
In this work, some results of a study of urban ozone
distribution patterns in the city of Badajoz, the largest and
most industrialized city in the Extremadura region (southwest Spain), are
shown. Fourteen sampling campaigns, at least one per month, were
carried out to measure ambient air ozone concentrations, during
periods that were selected according to favourable conditions to
ozone production, using an automatic portable analyzer.
Later, to evaluate the ozone distribution in the city, the measured
ozone data were analyzed using geostatistical techniques. First,
the exploratory analysis revealed that the data were
normally distributed, which is a desirable property for the subsequent
stages of the geostatistical study. Secondly, during the structural
analysis of data, theoretical spherical models provided the best fit for
all monthly experimental variograms. The parameters of these
variograms (sill, range and nugget) revealed that the maximum
distance of spatial dependence is between 302 and 790 m and that the
variable, air ozone concentration, is not evenly distributed over short
distances. Finally, predictive ozone maps were derived for all points
of the experimental study area, by use of geostatistical algorithms
(kriging). High prediction accuracy was obtained in all cases as
cross-validation showed. Useful information for hazard assessment
was also provided when probability maps, based on kriging
interpolation and kriging standard deviation, were produced.
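A sketch of the spherical variogram model that fitted the monthly experimental variograms; the sill, nugget and range values below are chosen for illustration (the range lies within the 302-790 m interval reported):

```python
# Spherical variogram: gamma(h) = c0 + (c - c0)*(1.5 h/a - 0.5 (h/a)^3)
# for h < a, and gamma(h) = c (the sill) beyond the range a.
import numpy as np

def spherical_variogram(h, nugget, sill, a):
    h = np.asarray(h, dtype=float)
    gamma = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, gamma, sill)

h = np.array([0.0, 100.0, 300.0, 500.0, 900.0])   # lag distances (m)
print(spherical_variogram(h, nugget=5.0, sill=40.0, a=500.0))
```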
Abstract: The aim of this paper is to discuss a low-cost methodology that can predict traffic flow conflicts and quantitatively rank crash expectancies (based on relative probability) for various traffic facilities. This paper focuses on the application of statistical distributions to model traffic flow and of Monte Carlo techniques to simulate traffic, and discusses how to create a tool to predict the possibility of a traffic crash. A low-cost data collection methodology is discussed for heterogeneous traffic flow, and a GIS platform is proposed to thematically represent traffic flow from simulations and the probability of a crash. Furthermore, the dynamism of the model is discussed with reference to its adaptability, adequacy, economy, and efficiency, to ensure adoption.
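A toy sketch of the Monte Carlo idea: draw vehicle headways from a fitted distribution and count how often they fall below a conflict threshold. The exponential headway model and the 1.5 s threshold are assumptions, not values from the paper:

```python
# Monte Carlo estimate of the relative probability of a traffic conflict.
import numpy as np

rng = np.random.default_rng(0)
mean_headway_s = 4.0                 # assumed mean headway from field data
n_vehicles = 100_000

headways = rng.exponential(mean_headway_s, n_vehicles)
conflicts = np.sum(headways < 1.5)   # headways too short to brake safely
print(f"relative conflict probability: {conflicts / n_vehicles:.3f}")
```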
Abstract: The goal of admission control is to support the Quality
of Service demands of real-time applications via resource reservation
in IP networks. In this paper we introduce a novel Dynamic
Admission Control (DAC) mechanism for IP networks. The DAC
dynamically allocates network resources using the previous network
pattern for each path and uses the dynamic admission algorithm to
improve bandwidth utilization using bandwidth brokers. We evaluate
the performance of the proposed mechanism through trace-driven
simulation experiments in terms of blocking probability,
throughput and normalized utilization.