Analysis of Meteorological Drought Using Standardized Precipitation Index – A Case Study of Puruliya District, West Bengal, India

Drought is universally acknowledged as a phenomenon associated with scarcity of water. The Standardized Precipitation Index (SPI) expresses the actual rainfall as a standardized departure from the rainfall probability distribution function. In this study, the severity and spatial pattern of meteorological drought in the Puruliya District, West Bengal, India were analyzed using multi-temporal SPI. Daily gridded data for the period 1971-2005 from 4 rainfall stations surrounding the study area were collected from IMD, Pune, and used in the analysis. A Geographic Information System (GIS) was used to generate drought severity maps for the different time scales and months of the year. Temporal SPI graphs show that the most severe SPI value (extreme drought) occurred at station 3 in the year 1993. Mild and moderate droughts occur in the central portion of the study area, while severe and extreme droughts were mostly found in the northeast, northwest and southwest parts of the region.
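
The SPI computation referred to above can be illustrated with a minimal sketch: fit a gamma distribution to a station's accumulated rainfall and map each total through the standard normal inverse CDF. The synthetic data, parameters, and the simple gamma fit below are illustrative assumptions, not the authors' exact procedure (which must also handle zero-rainfall months and multiple time scales).

```python
# Minimal SPI sketch on hypothetical data: fit a gamma distribution to monthly
# rainfall totals and convert each total to a standardized departure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
monthly_rain = rng.gamma(shape=2.0, scale=45.0, size=35 * 12)  # 1971-2005 stand-in

# Fit a two-parameter gamma (location fixed at zero) to the rainfall record.
shape, loc, scale = stats.gamma.fit(monthly_rain, floc=0)

# SPI: cumulative probability of each observation mapped to a standard normal deviate.
cdf_vals = stats.gamma.cdf(monthly_rain, shape, loc=loc, scale=scale)
spi = stats.norm.ppf(cdf_vals)

print("SPI range:", spi.min(), spi.max())  # values below -2 flag extreme drought
```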

A New Algorithm for Enhanced Robustness of Copyright Mark

This paper presents a new heavy-tailed-distribution-based scheme for hiding data in the discrete cosine transform (DCT) coefficients of an image, which provides statistical security as well as robustness against steganalysis attacks. Unlike other data hiding algorithms, the proposed technique introduces little change in the stego-image's DCT coefficient probability plots, making the presence of hidden data statistically undetectable. In addition, the proposed method does not compromise hiding capacity. When compared to the generic block-DCT-based data-hiding scheme, our method was found to be more robust against a variety of image-manipulation attacks such as filtering, blurring, and JPEG compression.
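
For orientation, the sketch below shows one common form of the generic block-DCT baseline that the abstract compares against (QIM-style parity embedding of one bit per 8x8 block), not the proposed heavy-tailed scheme; the block size, coefficient position and quantization step are assumptions chosen for illustration.

```python
# Generic block-DCT hiding sketch: embed one bit per 8x8 block by quantizing a
# mid-frequency DCT coefficient to an even/odd multiple of a step size.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_bit(block, bit, pos=(3, 4), step=8.0):
    """Quantize one mid-frequency coefficient so its parity carries the bit."""
    coeffs = dct2(block.astype(float))
    q = np.round(coeffs[pos] / step)
    if int(q) % 2 != bit:                      # force the parity to match the bit
        q += 1 if coeffs[pos] >= q * step else -1
    coeffs[pos] = q * step
    return idct2(coeffs)

def extract_bit(block, pos=(3, 4), step=8.0):
    coeffs = dct2(block.astype(float))
    return int(np.round(coeffs[pos] / step)) % 2

block = np.random.default_rng(1).integers(0, 256, (8, 8))
stego = embed_bit(block, bit=1)
print(extract_bit(stego))  # -> 1
```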

Face Localization Using Illumination-dependent Face Model for Visual Speech Recognition

A robust still image face localization algorithm capable of operating in an unconstrained visual environment is proposed. First, construction of a robust skin classifier within a shifted HSV color space is described. Then various filtering operations are performed to better isolate face candidates and mitigate the effect of substantial non-skin regions. Finally, a novel Bhattacharyya-based face detection algorithm is used to compare candidate regions of interest with a unique illumination-dependent face model probability distribution function approximation. Experimental results show a 90% face detection success rate despite the demands of the visually noisy environment.
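
The comparison step at the heart of the detector can be sketched as follows; only the Bhattacharyya similarity computation is shown, and the 32-bin histograms stand in for the illumination-dependent face model and a candidate region (both are synthetic placeholders).

```python
# Sketch of the Bhattacharyya comparison between a candidate region's histogram
# and a face-model histogram; both histograms below are illustrative stand-ins.
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Both inputs are normalized histograms (they sum to 1)."""
    return np.sum(np.sqrt(p * q))

rng = np.random.default_rng(2)
face_model = rng.random(32); face_model /= face_model.sum()
candidate  = rng.random(32); candidate  /= candidate.sum()

bc = bhattacharyya_coefficient(face_model, candidate)
distance = np.sqrt(1.0 - bc)   # Hellinger-style distance often paired with the coefficient
print(bc, distance)
```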

A Proposed Method for Increasing Delivery Performance in Dynamic Supply Networks

Supply network management adopts a systematic and integrative approach to managing the operations and relationships of the various parties in a supply network. The objective of manufacturers in their supply network is to reduce inventory costs and increase customer satisfaction levels. One way of doing this is to synchronize delivery performance. A supply network can be described by nodes representing the companies and the links (relationships) between these nodes. Uncertainty in delivery time depends on the type of relationship between suppliers. The problem is to understand how the individual uncertainties influence the total uncertainty of the network and to identify those parts of the network that have the highest potential for improving the total delivery-time uncertainty.
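
As a hedged illustration of how individual link uncertainties combine into the total delivery-time uncertainty, the sketch below runs a small Monte Carlo over three serial links; the lognormal lead times and their parameters are assumptions for illustration, not data or relationships from the paper.

```python
# Illustrative Monte Carlo: propagate delivery-time uncertainty through three
# serial links and report each link's share of the total variance.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Assumed lognormal lead times (days): supplier -> plant -> logistics
links = {
    "supplier":  rng.lognormal(mean=2.0, sigma=0.30, size=n),
    "plant":     rng.lognormal(mean=1.5, sigma=0.50, size=n),
    "logistics": rng.lognormal(mean=1.0, sigma=0.20, size=n),
}

total = sum(links.values())
print("total delivery time: mean %.1f, std %.1f" % (total.mean(), total.std()))
for name, t in links.items():
    print(f"{name:9s} variance share: {t.var() / total.var():.2f}")
```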

Order Statistics-based “Anti-Bayesian” Parametric Classification for Asymmetric Distributions in the Exponential Family

Although the field of parametric Pattern Recognition (PR) has been thoroughly studied for over five decades, the use of the Order Statistics (OS) of the distributions to achieve this has not been reported. The pioneering work on using OS for classification was presented in [1] for the Uniform distribution, where it was shown that optimal PR can be achieved in a counter-intuitive manner, diametrically opposed to the Bayesian paradigm, i.e., by comparing the testing sample to a few samples distant from the mean. This must be contrasted with the Bayesian paradigm in which, if we are allowed to compare the testing sample with only a single point in the feature space from each class, the optimal strategy would be to achieve this based on the (Mahalanobis) distance from the corresponding central points, for example, the means. In [2], we showed that the results could be extended for a few symmetric distributions within the exponential family. In this paper, we attempt to extend these results significantly by considering asymmetric distributions within the exponential family, for some of which even the closed form expressions of the cumulative distribution functions are not available. These distributions include the Rayleigh, Gamma and certain Beta distributions. As in [1] and [2], the new scheme, referred to as Classification by Moments of Order Statistics (CMOS), attains an accuracy very close to the optimal Bayes’ bound, as has been shown both theoretically and by rigorous experimental testing.
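
A hedged one-dimensional illustration of the anti-Bayesian idea (not the authors' full CMOS derivation) is sketched below: instead of comparing a test point with the class means, each class is represented by a single quantile located away from its mean, on the side facing the other class. The Rayleigh classes and the 1/3 and 2/3 quantiles are assumptions chosen purely for illustration.

```python
# Quantile-based "anti-Bayesian" classification sketch for two 1-D classes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
class1 = stats.rayleigh(scale=1.0)            # assumed asymmetric class distributions
class2 = stats.rayleigh(loc=1.5, scale=1.0)

# Representative points away from the means, facing the opposite class
# (CMOS derives such points from moments of order statistics).
p1 = class1.ppf(2.0 / 3.0)
p2 = class2.ppf(1.0 / 3.0)

def classify(x):
    return 1 if abs(x - p1) < abs(x - p2) else 2

x1 = class1.rvs(5000, random_state=rng)
x2 = class2.rvs(5000, random_state=rng)
acc = (np.mean([classify(x) == 1 for x in x1]) +
       np.mean([classify(x) == 2 for x in x2])) / 2
print(f"accuracy of the quantile-based rule: {acc:.3f}")
```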

Application of Pearson Parametric Distribution Model in Fatigue Life Reliability Evaluation

The aim of this paper is to introduce a parametric distribution model for fatigue life reliability analysis that deals with variation in material properties. Service loads, in the form of a response-time history signal of Belgian pave, were replicated on a multi-axial spindle-coupled road simulator, and the stress-life method was used to estimate the fatigue life of an automotive stub axle. A PSN curve was obtained by monotonic tension testing, and a two-parameter Weibull distribution function was used to obtain the mean life of the component. A Pearson system was developed to evaluate the fatigue life reliability by treating the stress-range intercept and the slope of the PSN curve as random variables. Assuming a normal distribution of fatigue strength, the fatigue life of the stub axle is found to have the highest reliability between 10,000 and 15,000 cycles. Since it takes into account the variation of material properties associated with size effects and machining and manufacturing conditions, the method described in this study can be effectively applied to determine the probability of failure of mass-produced parts.
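
The Weibull step mentioned above can be sketched as follows on hypothetical fatigue-life data; the fitted parameters and the reliability query are illustrative, not the paper's stub-axle results.

```python
# Minimal sketch: fit a two-parameter Weibull to assumed cycles-to-failure data
# and recover the mean life and a reliability value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
lives = rng.weibull(2.2, size=30) * 12_000              # hypothetical cycles to failure

shape, loc, scale = stats.weibull_min.fit(lives, floc=0)   # two-parameter fit
mean_life = stats.weibull_min.mean(shape, loc=loc, scale=scale)
reliability_at_10k = stats.weibull_min.sf(10_000, shape, loc=loc, scale=scale)
print(f"shape={shape:.2f}, scale={scale:.0f}, mean life={mean_life:.0f} cycles, "
      f"R(10,000)={reliability_at_10k:.2f}")
```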

Probabilistic Model Development for Project Performance Forecasting

In this paper, a model for forecasting project cost performance is developed based on past project cost and time performance. The study presents a probabilistic project control concept to ensure an acceptable forecast of project cost performance. In this concept, project activities are classified into sub-groups called control accounts. A Stochastic S-Curve (SS-Curve) is then obtained for each sub-group, and the project SS-Curve is obtained by summing the sub-groups' SS-Curves. In this model, project cost uncertainties are represented by Beta distribution functions of the activity costs required to complete the project at selected time sections during project execution, which are extracted from a variety of sources. Based on this model, after a certain percentage of project progress, project performance is measured via Earned Value Management to adjust the initial cost probability distribution functions. The future project cost performance is then predicted using the Monte Carlo simulation method.
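
An illustrative Monte Carlo of the SS-Curve idea is sketched below; the three control accounts, their cost ranges, and the Beta parameters are assumed figures, not the paper's data.

```python
# Each control account's cost is a scaled Beta variable; summing samples gives a
# project cost distribution, and percentiles give the stochastic S-curve band.
import numpy as np

rng = np.random.default_rng(6)
n = 50_000

# (min cost, max cost, alpha, beta) for three hypothetical control accounts
accounts = [(100, 160, 2.0, 3.0), (200, 320, 2.5, 2.5), (50, 90, 3.0, 2.0)]

samples = sum(lo + (hi - lo) * rng.beta(a, b, size=n) for lo, hi, a, b in accounts)

p10, p50, p90 = np.percentile(samples, [10, 50, 90])
print(f"project cost forecast: P10={p10:.0f}, P50={p50:.0f}, P90={p90:.0f}")
```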

Time-Domain Stator Current Condition Monitoring: Point Failure Detection by the Kolmogorov-Smirnov (K-S) Test

This paper deals with condition monitoring of the electric switch machines used for railway points. A point machine, a complex electro-mechanical device, switches the track between two alternative routes. There has been increasing interest in railway safety and in the optimal management of railway equipment maintenance, e.g. of point machines, in order to enhance railway service quality and reduce system failures. This paper explores the development of the Kolmogorov-Smirnov (K-S) test to detect certain point failures (external to the machine: slide chairs, fixings, stretchers, etc.) while the point machine itself is in proper condition. Time-domain stator current signatures of normal (healthy) and faulty points are acquired by 3 Hall-effect sensors and analyzed with the K-S test. The failures are simulated by creating three such conditions, namely placing a hard stone and a soft stone between the stock rail and the switch blades as obstacles, and increasing slide-chair friction. Applying the test to these three faults, the results show that the K-S test can effectively be extended to detect other point failures whose current signatures deviate parametrically from the healthy current signature. As an analysis technique, the K-S test assumes that each defect has a specific probability distribution. Empirical cumulative distribution functions (ECDFs) are used to differentiate these probability distributions. The test works on the null hypothesis that the ECDF of the target distribution is statistically similar to the ECDF of the reference distribution. Therefore, by comparing a given current signature (the target signal) from an unknown switch state to a number of template signatures (the reference signals) from known switch states, it is possible to identify the most likely state of the point machine under analysis.
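
The decision step can be sketched with the standard two-sample K-S test; the signals below are synthetic stand-ins for the Hall-effect current signatures, and the template set is illustrative.

```python
# Compare the ECDF of a measured current signature against healthy and faulty
# templates and pick the template with the smallest K-S statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
healthy_template  = rng.normal(5.0, 0.4, size=2000)
obstacle_template = rng.normal(6.2, 0.7, size=2000)
measured          = rng.normal(6.1, 0.7, size=2000)

for name, template in [("healthy", healthy_template), ("obstacle", obstacle_template)]:
    stat, p = stats.ks_2samp(measured, template)
    print(f"{name:8s}: KS statistic={stat:.3f}, p-value={p:.3g}")
```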

Structural Modelling of the LiCl Aqueous Solution: Using the Hybrid Reverse Monte Carlo (HRMC) Simulation

The Reverse Monte Carlo (RMC) simulation is applied to the study of an aqueous electrolyte, LiCl·6H2O. On the basis of the available experimental neutron scattering data, RMC computes pair radial distribution functions in order to explore the structural features of the system. The results obtained include some unrealistic features. To overcome this problem, we use the Hybrid Reverse Monte Carlo (HRMC) method, which incorporates an energy constraint in addition to the constraints commonly derived from experimental data. Our results show good agreement between the experimental and computed partial distribution functions (PDFs), as well as a significant improvement in the pair partial distribution curves. This kind of study can be considered a useful test of a given interaction model for conventional simulation techniques.
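
The hybrid acceptance rule can be sketched schematically as below; the functional form, the weighting constants, and the example numbers are assumptions for illustration, not the authors' actual constraint implementation.

```python
# Schematic HRMC acceptance: a trial move is accepted with a probability that
# combines the change in the fit to the data with the change in an energy term.
import numpy as np

rng = np.random.default_rng(8)

def hrmc_accept(delta_chi2, delta_energy, kT=0.025, sigma=1.0):
    """Combined data-fit and energy criterion: exp(-[d(chi^2)/(2 sigma^2) + dE/kT])."""
    argument = -(delta_chi2 / (2.0 * sigma ** 2) + delta_energy / kT)
    return argument >= 0 or rng.random() < np.exp(argument)

# Example: a move that slightly worsens the fit but lowers the energy.
print(hrmc_accept(delta_chi2=0.4, delta_energy=-0.01))
```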

A Method for Modeling Multiple Antenna Channels

In this paper we propose a method for modeling the correlation between the signals received by two or more antennas operating in a multipath environment. Considering the maximum excess delay of the channel being modeled, an elliptical region surrounding both the transmitter and receiver antennas is produced. A number of scatterers are randomly distributed in this region and scatter the incoming waves. The amplitude and phase of the incoming waves are computed and used to obtain the statistical properties of the received signals. The model has the distinct advantage of being applicable to any configuration of antennas. Furthermore, the joint PDF (probability distribution function) of the received wave amplitudes for any pair of antennas can be calculated and used to produce statistical parameters of the received signals.
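
A rough sketch of the idea is given below: scatterers are placed uniformly in an ellipse enclosing the transmitter and two receive antennas, the scattered complex waves are summed at each antenna, and the envelope correlation is estimated. The carrier frequency, geometry, and isotropic-scatterer assumption are illustrative, not the paper's parameters.

```python
# Scatterer-based two-antenna channel sketch with assumed geometry.
import numpy as np

rng = np.random.default_rng(9)
wavelength = 0.125                        # metres (2.4 GHz carrier, for illustration)
k = 2 * np.pi / wavelength
tx = np.array([0.0, 0.0])
rx = np.array([[50.0, 0.0], [50.0, 0.5]])   # two receive antennas, 0.5 m apart

n_trials, n_scatterers = 2000, 50
env = np.zeros((n_trials, 2))
for t in range(n_trials):
    # uniform scatterers inside an ellipse enclosing tx and rx
    r, theta = np.sqrt(rng.random(n_scatterers)), 2 * np.pi * rng.random(n_scatterers)
    s = np.column_stack([25 + 35 * r * np.cos(theta), 10 * r * np.sin(theta)])
    for a in range(2):
        path = np.linalg.norm(s - tx, axis=1) + np.linalg.norm(s - rx[a], axis=1)
        env[t, a] = np.abs(np.sum(np.exp(-1j * k * path) / path))

print("envelope correlation:", np.corrcoef(env[:, 0], env[:, 1])[0, 1])
```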

On the Comparison of Several Goodness-of-Fit Tests under Simple Random Sampling and Ranked Set Sampling

Many studies have compared the efficiency of goodness-of-fit procedures for identifying whether or not a particular distribution adequately explains a data set. In this paper, a study is conducted to investigate the power of several goodness-of-fit tests, namely the Kolmogorov-Smirnov (KS), Anderson-Darling (AD) and Cramér-von Mises (CV) tests, together with a proposed modification of the Kolmogorov-Smirnov test that incorporates a variance-stabilizing transformation (FKS). The performance of these tests is studied under simple random sampling (SRS) and ranked set sampling (RSS). The study shows that, in general, the Anderson-Darling (AD) test performs better than the other GOF tests; however, there are cases where the proposed test performs as well as the AD test.
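
The power-estimation loop can be sketched as follows, for simple random sampling only; the AD, RSS and proposed FKS variants are not reproduced, and the Gamma alternative with a fitted-normal null is an illustrative choice (its p-values are only approximate once parameters are estimated).

```python
# Estimate how often each test rejects a false null of normality when the data
# are really Gamma, under simple random sampling.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
n, reps, alpha = 30, 2000, 0.05
reject = {"KS": 0, "CvM": 0}

for _ in range(reps):
    x = rng.gamma(shape=2.0, scale=1.0, size=n)
    z = (x - x.mean()) / x.std(ddof=1)            # standardize against a fitted normal
    reject["KS"]  += stats.kstest(z, "norm").pvalue < alpha
    reject["CvM"] += stats.cramervonmises(z, "norm").pvalue < alpha

for name, count in reject.items():
    print(f"{name:3s} empirical power: {count / reps:.2f}")
```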

On Finite Wordlength Properties of Block-Floating-Point Arithmetic

Block floating point is a special case of floating point data representation in which a block of operands is forced to share a common exponent. This paper deals with the finite wordlength properties of this data format. The theoretical errors associated with the error model for the block floating point quantization process are investigated with the help of error distribution functions. A fast and easy approximation formula for calculating the signal-to-noise ratio in quantization to block floating point format is derived. The representation is found to be a useful compromise between fixed point and floating point formats due to its acceptable numerical error properties over a wide dynamic range.
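
A minimal sketch of block-floating-point quantization is given below; the 8-bit mantissa, 64-sample block, and Gaussian test data are assumptions, and the measured SNR simply illustrates the quantity the paper approximates analytically.

```python
# Quantize a block with one shared exponent and signed mantissas, then measure SNR.
import numpy as np

def bfp_quantize(block, mantissa_bits=8):
    """Shared exponent chosen for the largest element; mantissas rounded and clipped."""
    exp = int(np.ceil(np.log2(np.max(np.abs(block)) + 1e-30)))   # shared exponent
    scale = 2.0 ** (exp - (mantissa_bits - 1))                    # LSB weight
    mant = np.clip(np.round(block / scale),
                   -2**(mantissa_bits - 1), 2**(mantissa_bits - 1) - 1)
    return mant * scale

rng = np.random.default_rng(11)
x = rng.normal(0, 0.3, size=64)
xq = bfp_quantize(x)
snr_db = 10 * np.log10(np.sum(x**2) / np.sum((x - xq)**2))
print(f"quantization SNR: {snr_db:.1f} dB")
```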

Robust Reception of IEEE 802.15.4a IR-TH UWB in Dense Multipath and Gaussian Noise

The IEEE 802.15.4a impulse radio-time hopping ultra wideband (IR-TH UWB) physical layer, owing to its small duty cycle and very short pulse widths, is robust against multipath propagation. However, scattering and reflections from the large number of obstacles in indoor channel environments give rise to dense multipath fading. This poses a serious problem for optimum Rake receiver architectures, which require a very large number of fingers. The presence of strong noise also affects the reception of fine pulses with extremely low power spectral density. A robust SRake receiver for IEEE 802.15.4a IR-TH UWB in dense multipath and additive white Gaussian noise (AWGN) is proposed to efficiently recover the weak signals with much reduced complexity. It adaptively increases the signal-to-noise ratio (SNR) by suppressing noise through a recursive least squares (RLS) algorithm. For simulation, the dense multipath environment of the IEEE 802.15.4a industrial non-line-of-sight (NLOS) channel is employed. The power delay profile (PDP) and the cumulative distribution function (CDF) for this channel environment are found. Moreover, the error performance of the proposed architecture is evaluated in comparison with conventional SRake and AWGN correlation receivers. The simulation results indicate a substantial performance improvement with far fewer Rake fingers.

Moment Generating Functions of Observed Gaps between Hypopnea Using Saddlepoint Approximations

The saddlepoint approximation is one of the tools used to obtain expressions for densities and distribution functions. We approximate the densities of the observed gaps between hypopnea events using the Huzurbazar saddlepoint approximation, and demonstrate the density of a maximum likelihood estimator in exponential families.
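
A worked sketch of a saddlepoint density approximation is shown below, using a Gamma distribution (shape a, rate b) chosen purely for illustration rather than the hypopnea-gap model itself: with cumulant generating function K(s) = -a log(1 - s/b), the saddlepoint solves K'(s) = x and the density is approximated by exp(K(s) - s x) / sqrt(2 pi K''(s)).

```python
# Saddlepoint density approximation for an assumed Gamma(a, rate=b), compared
# with the exact density.
import numpy as np
from scipy import stats

a, b = 3.0, 1.5                      # shape and rate of the assumed Gamma

def saddlepoint_density(x):
    s = b - a / x                    # root of K'(s) = a / (b - s) = x
    K = -a * np.log(1.0 - s / b)
    K2 = a / (b - s) ** 2            # K''(s)
    return np.exp(K - s * x) / np.sqrt(2.0 * np.pi * K2)

x = np.linspace(0.5, 6.0, 5)
print(np.round(saddlepoint_density(x), 4))
print(np.round(stats.gamma.pdf(x, a, scale=1.0 / b), 4))   # exact, for comparison
```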

Statistical Reliability Based Modeling of Series and Parallel Operating Systems using Extreme Value Theory

This paper presents a new method for computing the reliability of a system arranged in a series or parallel configuration. In this method we estimate the life distribution function of the whole structure using the asymptotic Extreme Value (EV) distribution of Type I, or Gumbel, theory. We use the EV distribution in its minimal form to estimate the life distribution function of the series structure, and in its maximal form for the parallel system. All parameters are estimated by the method of moments. The reliability function, the failure (hazard) rate, and the p-th percentile point of each function are determined. Other important indices, such as Mean Time To Failure (MTTF) and Mean Time To Repair (MTTR), are also computed for non-repairable and renewal systems in both series and parallel structures.
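
The series/parallel idea is sketched below with assumed location and scale parameters: the series system's life is modelled by the minimum-type (Gumbel-left) law and the parallel system's life by the maximum-type (Gumbel-right) law, from which reliability, hazard rate, and MTTF follow.

```python
# Gumbel (EV Type I) reliability sketch for series (minima) and parallel (maxima) systems.
import numpy as np
from scipy import stats

mu, sigma = 1000.0, 120.0                        # assumed location and scale (hours)
series   = stats.gumbel_l(loc=mu, scale=sigma)   # EV Type I for minima
parallel = stats.gumbel_r(loc=mu, scale=sigma)   # EV Type I for maxima

t = 900.0
for name, dist in [("series", series), ("parallel", parallel)]:
    R = dist.sf(t)                               # reliability at time t
    hazard = dist.pdf(t) / dist.sf(t)            # failure (hazard) rate
    print(f"{name:8s}: R({t:.0f}h)={R:.3f}, hazard={hazard:.4f}/h, MTTF={dist.mean():.0f}h")
```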

Image Compression with Back-Propagation Neural Network using Cumulative Distribution Function

Image compression using artificial neural networks is a topic of ongoing research directed towards achieving a generalized and economical network. Feedforward networks trained with the back-propagation algorithm, which uses steepest descent for error minimization, are popular, widely adopted, and directly applicable to image compression. Much of this research aims at quick convergence of the network without loss of quality in the restored image. In general, the images to be compressed are of different types, such as dark images, high-intensity images, etc. When these images are compressed using a back-propagation network, the network takes longer to converge. The reason is that the image may contain a number of distinct gray levels that differ only slightly from those of their neighboring pixels. If the gray levels of the pixels and their neighbors are mapped such that these differences are minimized, both the compression ratio and the convergence of the network can be improved. To achieve this, a cumulative distribution function is estimated for the image and used to map the image pixels. When the mapped image pixels are used, the back-propagation neural network yields a high compression ratio and converges quickly.
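
The CDF-based pre-mapping can be sketched on its own (without the back-propagation network); the low-contrast test image below is a synthetic stand-in.

```python
# Remap gray levels through the image histogram's CDF so that neighbouring
# pixel values become more widely and evenly spread before compression.
import numpy as np

rng = np.random.default_rng(12)
image = rng.integers(90, 130, size=(64, 64)).astype(np.uint8)   # low-contrast stand-in

hist = np.bincount(image.ravel(), minlength=256)
cdf = np.cumsum(hist) / image.size
mapped = np.round(255 * cdf[image]).astype(np.uint8)            # CDF-based pixel mapping

print("original gray-level span:", image.min(), image.max())
print("mapped   gray-level span:", mapped.min(), mapped.max())
```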

Software Reliability Prediction Model Analysis

Software reliability prediction provides a great opportunity to measure the software failure rate at any point throughout system testing. A software reliability prediction model provides a technique for improving reliability. Software reliability is a very important factor in estimating overall system reliability, which depends on the individual component reliabilities; it differs from hardware reliability in that it reflects design perfection. The main reason for software reliability problems is the high complexity of software. Various approaches can be used to improve the reliability of software. In this article we focus on a software reliability model, assuming time redundancy whose value (the number of repeated transmissions of basic blocks) can be an optimization parameter. We consider the given mathematical model under the assumption that the system may experience not only irreversible failures but also failures that can be treated as self-repairing, which significantly affect the reliability and accuracy of information transfer. The main task of this paper is to find the time distribution function (DF) of transmitting an instruction sequence that consists of a random number of basic blocks. We consider the system software unreliable; the time between adjacent failures has an exponential distribution.

Design of Auto Exposure Unit Based on 2-Way Histogram Equalization

Histogram equalization is often used in image enhancement, but it can also be used for auto exposure. However, conventional histogram equalization does not work well when many pixels are concentrated in a narrow luminance range. This paper proposes an auto exposure method based on 2-way histogram equalization. Two cumulative distribution functions are used, one accumulated from dark to bright and the other from bright to dark. In this paper, the proposed auto exposure method is also designed and implemented for image signal processors handling full-HD images.
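
The two cumulative distributions the method builds can be sketched as below; the low-key synthetic luminance data and the 0.5 crossing printed at the end are illustrative assumptions, not the paper's exposure-control logic.

```python
# Build one CDF accumulated from dark to bright and one from bright to dark.
import numpy as np

rng = np.random.default_rng(13)
luma = rng.integers(0, 80, size=100_000)           # pixels crowded in the dark range

hist = np.bincount(luma, minlength=256).astype(float)
hist /= hist.sum()
cdf_dark_to_bright = np.cumsum(hist)                # conventional direction
cdf_bright_to_dark = np.cumsum(hist[::-1])[::-1]    # reverse-direction accumulation

# The two curves bracket the luminance mass differently; e.g. the level where
# each curve crosses 0.5 could drive an exposure decision.
print(np.argmax(cdf_dark_to_bright >= 0.5), np.argmax(cdf_bright_to_dark <= 0.5))
```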

Probabilistic Characteristics of Older PR Frames in the Mid-America Earthquake Region

The probabilistic characteristics of the seismic responses of partially restrained connection rotation (PRCR) and panel zone deformation (PZD) in older steel moment frames were investigated using statistical inference for the decision-making process. Four-, six-, and eight-story older steel moment frames with clip-angle and T-stub connections were designed and analyzed using 2%/50-year ground motions for four cities in the Mid-America earthquake region. The probability density function and cumulative distribution function of PRCR and PZD were determined by goodness-of-fit tests based on probabilistic parameters measured from the results of the nonlinear time-history analyses. The obtained probabilistic parameters and distributions can be used to determine which performance level the PR connections and panel zones mainly satisfy and how many PR connections and panel zones experience serious damage under the Mid-America ground motions.

Screened Potential in a Reverse Monte Carlo (RMC) Simulation

A structural study is carried out on an aqueous electrolyte for which experimental results are available: a LiCl·6H2O solution in the glassy state (120 K), contrasted with pure water at room temperature by means of partial distribution functions (PDFs) obtained from the neutron scattering technique. Based on these partial functions, the Reverse Monte Carlo (RMC) method computes radial and angular correlation functions, which allow a number of structural features of the system to be explored. The obtained curves include some artifacts. To remedy this, we propose to introduce a screened potential as an additional constraint. The results show good agreement between the experimental and computed functions and a significant improvement in the PDF curves when the potential constraint is applied, suggesting an efficient fit of the pair distribution function curves.