Unscented Transformation for Estimating the Lyapunov Exponents of Chaotic Time Series Corrupted by Random Noise

Many systems in the natural world exhibit chaotic or non-linear behavior whose complexity is so great that they appear to be random. Identifying chaos in experimental data is essential for characterizing the system and for analyzing the predictability of the data under analysis. The Lyapunov exponents provide a quantitative measure of the sensitivity to initial conditions and are the most useful dynamical diagnostic for chaotic systems. However, it is difficult to accurately estimate the Lyapunov exponents of chaotic signals that are corrupted by random noise. In this work, a method for estimating Lyapunov exponents from noisy time series using the unscented transformation is proposed. The proposed methodology was validated on time series obtained from known chaotic maps. The objective of the work, the proposed methodology and the validation results are discussed in detail.
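
For orientation, the quantity being estimated can be illustrated on a clean, known system: the sketch below computes the largest Lyapunov exponent of the logistic map from its analytic derivative. This is only a baseline illustration, not the proposed unscented-transformation estimator, which works from noisy measured series.

```python
import numpy as np

def largest_lyapunov_logistic(r=4.0, x0=0.2, n_iter=100_000, n_discard=1_000):
    """Largest Lyapunov exponent of the logistic map x_{n+1} = r*x_n*(1-x_n),
    computed as the orbit average of ln|f'(x)| with f'(x) = r*(1-2x)."""
    x = x0
    for _ in range(n_discard):          # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        acc += np.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n_iter

print(largest_lyapunov_logistic())      # approx ln(2) ~ 0.693 for r = 4
```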

Preliminary Chaos Analyses of Explosion Earthquakes Followed by Harmonic Tremors at Semeru Volcano, East Java, Indonesia

Successive events of explosion earthquakes and harmonic tremors recorded at Semeru volcano were analyzed to investigate the dynamical system underlying their eruptive mechanism. The eruptive activity at Semeru volcano, East Java, Indonesia, is the intermittent emission of ash and bombs in Strombolian style, occurring at intervals of 15 to 45 minutes. The explosive eruptions are accompanied by explosion earthquakes and followed by volcanic tremor generated by the continuous emission of volcanic ash. The spectra and Lyapunov exponents of successive explosion-earthquake and harmonic-tremor events were analyzed. Peak frequencies of the explosion earthquakes range from 1.2 to 1.9 Hz, and those of the harmonic tremors range from 1.5 to 2.2 Hz. The phase space is reconstructed and evaluated based on the Lyapunov exponents. Harmonic tremors have smaller Lyapunov exponents than explosion earthquakes. This can be interpreted as reflecting the correlated complexity of the mechanism, given the variation in spectral content and fractal dimension, and it can be concluded that the successive harmonic-tremor and explosion events are chaotic.

Classification and Analysis of Risks in Software Engineering

Despite the various methods that exist for software risk management, software projects still have a high failure rate. As project complexity and size increase, managing software development becomes more difficult, and in such projects thorough analysis and risk assessment are vital. In this paper, a classification for software risks is specified. Relations between these risks are then presented using a risk tree structure. Analysis and assessment of these risks are carried out using probabilistic calculations. This analysis supports qualitative and quantitative assessment of the risk of failure and can aid the software risk management process. The classification and risk tree structure can also be applied in software tools.
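
As an illustration of the kind of probabilistic calculation a risk tree supports, the sketch below combines hypothetical, independent leaf-risk probabilities at an OR node; the paper's exact risk-tree calculus is not reproduced here.

```python
def or_node_probability(child_probs):
    """Probability that at least one child risk occurs, assuming the child
    risks are independent (an OR node in the risk tree)."""
    p_none = 1.0
    for p in child_probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# Hypothetical leaf risks feeding a "schedule failure" node.
print(round(or_node_probability([0.10, 0.05, 0.20]), 3))   # 0.316
```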

A Signal Driven Adaptive Resolution Short-Time Fourier Transform

The frequency content of non-stationary signals varies with time, so a smart time-frequency representation is necessary for their proper characterization. Classically, the STFT (short-time Fourier transform) is employed for this purpose. Its limitation is its fixed time-frequency resolution. To overcome this drawback, an enhanced STFT is devised. It is based on a signal-driven sampling scheme known as cross-level sampling, and it adapts the sampling frequency and the window function (length and shape) by following the local variations of the input signal. This adaptation gives the proposed technique its appealing features: adaptive time-frequency resolution and computational efficiency.
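
As a baseline for comparison, the sketch below is a classical fixed-resolution STFT (a windowed FFT with one window length for the whole signal); the cross-level-sampling-driven adaptive version proposed in the paper is not reproduced here.

```python
import numpy as np

def stft(x, fs, win_len=256, hop=64):
    """Classical STFT with a single Hann window: one fixed window length for
    the whole signal, hence one fixed time-frequency resolution."""
    window = np.hanning(win_len)
    frames = [x[i:i + win_len] * window
              for i in range(0, len(x) - win_len + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1))               # magnitude spectrogram
    times = (np.arange(len(frames)) * hop + win_len / 2) / fs
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    return times, freqs, spec

# Toy usage: a chirp rising from 50 Hz to 400 Hz over 2 s.
fs = 2000
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * (50 * t + 87.5 * t ** 2))
times, freqs, spec = stft(x, fs)
print(spec.shape)                                            # (frames, frequency bins)
```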

Object-Oriented Cognitive-Spatial Complexity Measures

Software maintenance, and chiefly software comprehension, accounts for the largest costs in the software lifecycle. To assess the cost of software comprehension, various complexity measures have been proposed in the literature. This paper proposes new cognitive-spatial complexity measures, which combine the spatial and the architectural aspects of the software to compute its complexity. The spatial aspect is captured using the lexical distances (in number of lines of code) between different program elements, and the architectural aspect is captured using the cognitive weights of the control structures present in the control flow of the program. The proposed measures are evaluated using standard axiomatic frameworks and then compared with the corresponding existing cognitive complexity measures as well as the spatial complexity measures for object-oriented software. This study establishes that the proposed measures are better indicators of the cognitive effort required for software comprehension than the other existing complexity measures for object-oriented software.

Modeling, Simulation and Monitoring of Nuclear Reactor Using Directed Graph and Bond Graph

The main objective of this paper is to develop a graphical technique for the modeling, simulation and diagnosis of industrial systems. This is especially important for a complex system such as a pressurized water nuclear reactor, which exhibits various non-linearities and time scales. In this case the analytical approach is cumbersome and does not give a quick picture of the evolution of the system. The Bond Graph tool enabled us to transform the analytical model into a graphical model, and the Bond Graph simulation software SYMBOLS 2000 made it possible to validate the model and reproduce the results given by the technical specifications. We introduce the analysis of the problem of fault localization and identification in complex industrial processes. We propose a fault detection method applied to diagnosis and to determining the severity of a detected fault, and we show the possibilities of applying the new diagnosis approaches to complex system control. Industrial systems have become increasingly complex, and fault diagnosis procedures for physical systems become very involved as soon as the systems considered are no longer elementary. Faced with this complexity, we resort to the Fault Detection and Isolation (FDI) method, analyze the associated control problem, and design a reliable diagnosis system capable of handling spatially distributed complex dynamic systems, applied here to a standard pressurized water nuclear reactor.

Inspection of Geometrical Integrity of Work Piece and Measurement of Tool Wear by the Use of Photo Digitizing Method

Considering the complexity of products and the new geometrical designs and required tolerances, measurement and dimensional control call for modern and more precise methods. The photo digitizing method, which uses two cameras to record images, the conventional "cloud points" representation, and data analysis with the ATOUS software, is known to be modern and efficient in this context. In this paper, the benefits of the photo digitizing method in evaluating samples from machining processes are put forward. For example, assessment of the geometrical integrity of the surface in a 5-axis milling process and measurement of carbide tool wear in a turning process are presented. The advantages of this method compared to conventional methods are discussed.

A Characterized and Optimized Approach for End-to-End Delay Constrained QoS Routing

QoS routing aims to find paths between senders and receivers that satisfy the QoS requirements of the application while using network resources efficiently; the underlying routing algorithm must be able to find low-cost paths that satisfy the given QoS constraints. The problem of finding a least-cost route subject to such constraints is known to be NP-hard, and several algorithms have been proposed to find a near-optimal solution. However, these heuristics either impose relationships among the link metrics to reduce the complexity of the problem, which may limit their general applicability, or are too costly in terms of execution time to be applicable to large networks. In this paper, we analyze two algorithms, namely Characterized Delay Constrained Routing (CDCR) and Optimized Delay Constrained Routing (ODCR). The CDCR algorithm provides an approach to delay-constrained routing that captures the trade-off between cost minimization and the risk level regarding the delay constraint. ODCR uses an adaptive path weight function together with an additional constraint imposed on the path cost to restrict the search space, and hence finds a near-optimal solution much more quickly.
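
Neither CDCR nor ODCR is specified here in enough detail to reproduce; the hedged sketch below only illustrates the underlying idea of folding the delay constraint into the path weight and then checking feasibility of the resulting path, on a small hypothetical topology.

```python
import heapq

def dijkstra(graph, src, dst, weight):
    """Plain Dijkstra under an arbitrary per-edge weight function."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue
        for v, cost, delay in graph[u]:
            nd = d + weight(cost, delay)
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def path_metrics(graph, path):
    edge = {(u, v): (c, d) for u in graph for v, c, d in graph[u]}
    cost = sum(edge[a, b][0] for a, b in zip(path, path[1:]))
    delay = sum(edge[a, b][1] for a, b in zip(path, path[1:]))
    return cost, delay

# Hypothetical topology: each entry is (neighbour, cost, delay).
graph = {'S': [('A', 1, 5), ('B', 3, 1)], 'A': [('T', 1, 5)], 'B': [('T', 3, 1)], 'T': []}
delay_bound = 4
for lam in (0.0, 1.0):                      # fold the delay into the path weight
    p = dijkstra(graph, 'S', 'T', lambda c, d, lam=lam: c + lam * d)
    cost, delay = path_metrics(graph, p)
    print(lam, p, cost, delay, 'ok' if delay <= delay_bound else 'violates bound')
```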

A Theory in Optimization of Ad-hoc Routing Algorithms

In this paper, optimization of routing in ad-hoc networks is surveyed and a new method for reducing the complexity of routing algorithms is suggested. Maintaining a binary matrix at each node in the network, updated once a route has been established, allows nodes to avoid repeating the routing protocol for every data transfer. The suggested algorithm can reduce the complexity of routing to a minimum.
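
The abstract does not detail the matrix layout, so the sketch below is only one hedged reading of the idea: a per-node boolean matrix over (destination, next hop) pairs that is filled in after a route discovery and consulted before launching another one.

```python
import numpy as np

class Node:
    """Each node keeps a binary matrix over (destination, next hop) pairs.
    An entry set to 1 means a route learned by an earlier discovery can be
    reused, so the routing protocol is not re-run for every data transfer."""

    def __init__(self, n_nodes):
        self.route_known = np.zeros((n_nodes, n_nodes), dtype=bool)

    def next_hop(self, dest):
        hops = np.flatnonzero(self.route_known[dest])
        return int(hops[0]) if hops.size else None    # None -> run route discovery

    def learn_route(self, dest, next_hop):
        self.route_known[dest, next_hop] = True        # update once routing is done

node = Node(n_nodes=5)
print(node.next_hop(3))      # None: first transfer to node 3 needs discovery
node.learn_route(3, 2)       # suppose discovery returned next hop 2
print(node.next_hop(3))      # 2: subsequent transfers reuse the cached entry
```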

Robotic Hands: Design Review and Proposal of New Design Process

In this paper we intend to ascertain the state of the art in multifingered end-effectors, also known as robotic hands or dexterous robot hands, and propose an experimental setup for an innovative task-based design approach involving cutting-edge motion capture technologies. After an initial description of the capabilities and complexity of a human hand when grasping objects, intended to point out the importance of replicating it, we analyze the mechanical and kinematic structure of some important works carried out around the world in the last three decades and also review the actuator and sensing technologies used. Finally, we describe a new design philosophy, proposing an experimental setup for its first stage that uses recent developments in human body motion capture systems and might lead to lighter and ever more dexterous robotic hands.

A Model to Determine Atmospheric Stability and its Correlation with CO Concentration

Atmospheric stability plays the most important role in the transport and dispersion of air pollutants. Different methods are used for stability determination, with varying degrees of complexity. Most of these methods are based on the relative magnitude of convective and mechanical turbulence in atmospheric motions. The Richardson number, the Monin-Obukhov length, the Pasquill-Gifford stability classification and the Pasquill-Turner stability classification are the most common parameters and methods. The Pasquill-Turner Method (PTM), which is employed in this study, uses observations of wind speed, insolation and the time of day to classify atmospheric stability with distinguishable indices. In this study, a model is presented for determining atmospheric stability conditions using the PTM. As a case study, meteorological data from Mehrabad station in Tehran from 2000 to 2005 are applied to the model. Three different categories are considered to deduce the pattern of stability conditions. First, the overall pattern of stability classification is obtained, and the results show that the atmosphere is in stable, neutral and unstable conditions 38.77%, 27.26% and 33.97% of the time, respectively. It is also observed that days are mostly unstable (66.50%) while nights are mostly stable (72.55%). Second, monthly and seasonal patterns are derived; the results indicate that the relative frequency of stable conditions decreases from January to June and increases from June to December, while unstable conditions behave in exactly the opposite manner. Autumn is the most stable season, with a relative frequency of 50.69% for stable conditions, compared with 42.79%, 34.38% and 27.08% for winter, summer and spring, respectively. The hourly stability pattern is the third category; it shows that unstable conditions are dominant from approximately 03:00 to 15:00 GMT and 04:00 to 12:00 GMT for the warm and cold seasons, respectively. Finally, the correlation between atmospheric stability and CO concentration is obtained.
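
Since the abstract names the inputs to the classification (wind speed, insolation, time of day), a minimal sketch of the closely related Pasquill-Gifford lookup is given below for orientation; the class boundaries follow the standard published table with split classes collapsed, and the paper's Turner-index implementation may differ in detail.

```python
def pasquill_class(wind_speed, insolation=None, night_cloud_octas=None):
    """Simplified Pasquill-Gifford lookup. Daytime: insolation is 'strong',
    'moderate' or 'slight'; night: cloud cover in octas. Split classes such as
    A-B are collapsed to a single letter, and the light-wind night cases, left
    blank in the original table, are mapped to the nearest class here."""
    day_table = {'strong':   ['A', 'A', 'B', 'C', 'C'],
                 'moderate': ['A', 'B', 'B', 'C', 'D'],
                 'slight':   ['B', 'C', 'C', 'D', 'D']}
    # surface wind-speed bins (m/s): <2, 2-3, 3-5, 5-6, >6
    i = sum(wind_speed >= v for v in (2.0, 3.0, 5.0, 6.0))
    if insolation is not None:                         # daytime
        return day_table[insolation][i]
    if night_cloud_octas >= 4:                         # cloudy night
        return ['E', 'E', 'D', 'D', 'D'][i]
    return ['F', 'F', 'E', 'D', 'D'][i]                # clear night

print(pasquill_class(2.5, insolation='strong'))        # 'A' (unstable day)
print(pasquill_class(2.5, night_cloud_octas=1))        # 'F' (stable night)
```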

Signal Driven Sampling and Filtering a Promising Approach for Time Varying Signals Processing

Mobile systems are powered by batteries, and reducing system power consumption is key to increasing their autonomy. Such systems mostly deal with time-varying signals. We therefore aim to achieve power efficiency by smartly adapting the system's processing activity to the local characteristics of the input signal. This is done by completely rethinking the processing chain and adopting signal-driven sampling and processing. In this context, a signal-driven filtering technique based on level-crossing sampling is devised. It adapts the sampling frequency and the filter order by analysing the local variations of the input signal, thereby correlating the processing activity with the signal variations. This leads to a drastic computational gain of the proposed technique over the classical one.
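
To make the sampling idea concrete, the sketch below shows a basic level-crossing sampler on a toy signal (a minimal illustration with uniformly spaced levels; the paper's adaptive-rate filtering chain is not reproduced).

```python
import numpy as np

def level_crossing_sample(x, t, delta=0.1):
    """Keep a sample only when the signal crosses one of the uniformly spaced
    levels k*delta, so quiet segments produce almost no samples and hence
    almost no downstream processing activity."""
    kept = [0]
    last_level = np.round(x[0] / delta)
    for i in range(1, len(x)):
        level = np.round(x[i] / delta)
        if level != last_level:                        # a level was crossed
            kept.append(i)
            last_level = level
    return t[kept], x[kept]

# Toy usage: a 50 Hz burst surrounded by near-silence.
t = np.linspace(0.0, 1.0, 4000)
x = np.where((t > 0.4) & (t < 0.6), np.sin(2 * np.pi * 50 * t), 0.0)
ts, xs = level_crossing_sample(x, t)
print(len(t), "uniform samples ->", len(ts), "level-crossing samples")
```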

Estimation of Skew Angle in Binary Document Images Using Hough Transform

This paper presents two novel techniques for skew estimation of binary document images. The algorithms are based on connected component analysis and the Hough transform, and both focus on reducing the amount of input data provided to the Hough transform. In the first method, referred to as the word centroid approach, the centroids of selected words are used for skew detection. In the second method, referred to as the dilate-and-thin approach, selected characters are blocked and dilated to obtain word blocks, and thinning is then applied. The final image fed to the Hough transform contains the thinned coordinates of the word blocks in the image. The methods succeed in reducing the computational complexity of Hough-transform-based skew estimation algorithms. Promising experimental results are provided to demonstrate the effectiveness of the proposed methods.
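
As a point of reference, the hedged sketch below runs a straight-line Hough vote over a reduced point set (such as word centroids) on synthetic data; the preprocessing that produces the centroids or thinned word blocks in the two proposed methods is not reproduced here.

```python
import numpy as np

def estimate_skew(points, angle_range=15.0, angle_step=0.1):
    """Vote rho = x*cos(theta) + y*sin(theta) over a small angle range around
    the horizontal and return the angle whose accumulator column shows the
    strongest concentration (i.e. the best-aligned text lines)."""
    thetas = np.deg2rad(np.arange(90.0 - angle_range, 90.0 + angle_range, angle_step))
    x, y = points[:, 0], points[:, 1]
    rho = np.outer(x, np.cos(thetas)) + np.outer(y, np.sin(thetas))
    best_score, best_theta = -1.0, thetas[0]
    for k, theta in enumerate(thetas):
        hist, _ = np.histogram(rho[:, k], bins=200)    # accumulator column
        score = np.sort(hist)[-5:].sum()               # strength of the top few lines
        if score > best_score:
            best_score, best_theta = score, theta
    return np.rad2deg(best_theta) - 90.0               # skew angle in degrees

# Toy usage (synthetic data): word centroids on 10 text lines skewed by 3 degrees.
rng = np.random.default_rng(0)
skew = np.deg2rad(3.0)
rows = []
for row in range(10):
    xs = np.linspace(0, 500, 40)
    ys = 30.0 * row + xs * np.tan(skew) + rng.normal(0, 0.5, xs.size)
    rows.append(np.c_[xs, ys])
print(round(estimate_skew(np.concatenate(rows)), 1))   # close to 3.0
```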

Extended Least Squares LS–SVM

Among neural models, Support Vector Machine (SVM) solutions are attracting increasing attention, mostly because they eliminate certain crucial questions involved in neural network construction. The main drawback of the standard SVM is its high computational complexity; therefore, a new technique, the Least Squares SVM (LS–SVM), has recently been introduced. In this paper we present an extended view of Least Squares Support Vector Regression (LS–SVR), which enables us to develop new formulations and algorithms for this regression technique. Based on manipulating the linear equation set, which embodies all information about the regression in the learning process, some new methods are introduced to simplify the formulations, speed up the calculations and/or provide better results.
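
For readers unfamiliar with the linear equation set referred to above, the sketch below solves the standard LS-SVR dual system (in the usual Suykens formulation) with an RBF kernel on a toy regression problem; the paper's reformulations and speed-ups operate on this same system and are not reproduced here.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the standard LS-SVR dual linear system
        [ 0     1^T          ] [ b     ]   [ 0 ]
        [ 1     K + I/gamma  ] [ alpha ] = [ y ]
    directly; the predictor is f(x) = sum_i alpha_i k(x, x_i) + b."""
    N = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(N) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, y])
    b, alpha = sol[0], sol[1:]
    return lambda Xnew: rbf_kernel(Xnew, X, sigma) @ alpha + b

# Toy usage: regression of a noisy sine.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 2.0 * np.pi, 50)[:, None]
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
predict = lssvr_fit(X, y, gamma=50.0, sigma=0.7)
print(round(np.abs(predict(X) - np.sin(X[:, 0])).mean(), 3))   # small training error
```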

A Logic Approach to Database Dynamic Updating

We introduce a logic-based framework for database updating under constraints. In our framework, the constraints are represented as an instantiated extended logic program. When an update is performed, database consistency may be violated. We provide an approach to maintaining database consistency and study the conditions under which the maintenance process is deterministic. We show that the complexity of the computations and decision problems arising in our framework is in each case polynomial.

Multi Switched Split Vector Quantizer

Vector quantization is a powerful tool for speech coding applications. This paper deals with LPC coding of speech signals using a new technique called Multi Switched Split Vector Quantization, a hybrid of two product-code vector quantization techniques, namely multistage vector quantization and switched split vector quantization. Multi Switched Split Vector Quantization quantizes the linear predictive coefficients in terms of line spectral frequencies. The results show that Multi Switched Split Vector Quantization provides a better trade-off between bitrate and spectral distortion performance, as well as lower computational complexity and memory requirements, compared to the Switched Split Vector Quantization, multistage vector quantization and Split Vector Quantization techniques. By employing the switching technique at each stage of the vector quantizer, the spectral distortion, computational complexity and memory requirements are greatly reduced. Spectral distortion is measured in dB, computational complexity in floating point operations (flops), and memory requirements in floats.
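
The spectral distortion figure quoted above is a standard measure; the hedged sketch below computes it for a pair of LPC coefficient vectors (an original and a crudely rounded stand-in for the quantizer output), as the root-mean-square difference of the log power spectra in dB.

```python
import numpy as np

def lpc_power_spectrum(a, n_fft=512):
    """Power spectrum 1/|A(e^{jw})|^2 of an all-pole model with
    coefficients a = [1, a1, ..., ap]."""
    return 1.0 / np.abs(np.fft.rfft(a, n_fft)) ** 2

def spectral_distortion_db(a_orig, a_quant, n_fft=512):
    """Root-mean-square difference of the two log power spectra, in dB."""
    P1 = 10.0 * np.log10(lpc_power_spectrum(a_orig, n_fft))
    P2 = 10.0 * np.log10(lpc_power_spectrum(a_quant, n_fft))
    return np.sqrt(np.mean((P1 - P2) ** 2))

# Toy usage: a 10th-order LPC vector and a coarsely rounded (quantized) copy.
rng = np.random.default_rng(0)
a = np.r_[1.0, 0.05 * rng.standard_normal(10)]
a_hat = np.round(a * 16) / 16                        # stand-in for the quantizer output
print(round(spectral_distortion_db(a, a_hat), 3), "dB")
```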

Dynamic Bayesian Networks Modeling for Inferring Genetic Regulatory Networks by Search Strategy: Comparison between Greedy Hill Climbing and MCMC Methods

Using Dynamic Bayesian Networks (DBN) to model genetic regulatory networks from gene expression data is one of the major paradigms for inferring the interactions among genes. Averaging over a collection of models is preferable to relying on a single high-scoring model when predicting the network. In this paper, two kinds of model search approaches are compared: Greedy hill-climbing Search with Restarts (GSR) and Markov Chain Monte Carlo (MCMC) methods. GSR is preferred in many papers, but there has been no comparative study of which approach is better for DBN models. Different types of experiments have been carried out to benchmark these approaches. Our experimental results demonstrate that, on average, the MCMC methods outperform GSR in the accuracy of the predicted network while having comparable time efficiency. By proposing different variations of MCMC and employing a simulated annealing strategy, the MCMC methods become more efficient and stable. Apart from the comparison between these approaches, another objective of this study is to investigate the feasibility of using DBN modeling approaches to infer gene networks from a few snapshots of high-dimensional gene profiles. Through synthetic data experiments as well as systematic data experiments, the results reveal how the performance of these approaches is influenced as the target gene network varies in network size, data size and system complexity.
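
To illustrate the MCMC side of the comparison, the sketch below runs a Metropolis-Hastings search over edge sets of a first-order DBN with a simple linear-Gaussian stand-in score; the scoring functions, proposal moves and simulated-annealing variants used in the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_score(adj, data):
    """Stand-in network score: profile Gaussian log-likelihood of a first-order
    linear DBN (each gene regressed on its parents at the previous time step)
    minus an edge penalty. Replace with a proper BDe/BIC score in practice."""
    T, n = data.shape
    fit = 0.0
    for child in range(n):
        parents = np.flatnonzero(adj[:, child])
        X = np.c_[data[:-1, parents], np.ones(T - 1)]
        beta, *_ = np.linalg.lstsq(X, data[1:, child], rcond=None)
        resid = data[1:, child] - X @ beta
        fit -= 0.5 * (T - 1) * np.log(np.mean(resid ** 2) + 1e-12)
    return fit - 2.0 * adj.sum()                       # penalize dense graphs

def mcmc_structure_search(data, n_iter=2000):
    """Metropolis-Hastings over edge sets: propose flipping one edge, accept
    with probability min(1, exp(new_score - current_score))."""
    n = data.shape[1]
    adj = np.zeros((n, n), dtype=int)                  # adj[i, j] = 1: gene i -> gene j
    cur = log_score(adj, data)
    samples = []
    for _ in range(n_iter):
        i, j = rng.integers(n, size=2)
        prop = adj.copy()
        prop[i, j] ^= 1
        new = log_score(prop, data)
        if np.log(rng.random()) < new - cur:
            adj, cur = prop, new
        samples.append(adj.copy())
    return np.mean(samples[n_iter // 2:], axis=0)      # posterior edge frequencies

data = rng.standard_normal((30, 5))                    # toy series: 30 time points, 5 genes
print(mcmc_structure_search(data).round(2))
```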

MONARC: A Case Study on Simulation Analysis for LHC Activities

The scale, complexity and worldwide geographical spread of the LHC computing and data analysis problems are unprecedented in scientific research. The complexity of processing and accessing this data is increased substantially by the size and global span of the major experiments, combined with the limited wide-area network bandwidth available. We present the latest generation of the MONARC (MOdels of Networked Analysis at Regional Centers) simulation framework as a design and modeling tool for large-scale distributed systems applied to HEP experiments. We present simulation experiments designed to evaluate the capability of the current real-world distributed infrastructure to support existing physics analysis processes, and the means by which the experiments band together to meet the technical challenges posed by the storage, access and computing requirements of LHC data analysis within the CMS experiment.

Direction of Arrival Estimation Based on a Single Port Smart Antenna Using MUSIC Algorithm with Periodic Signals

A novel direction-of-arrival (DOA) estimation technique, which uses the conventional multiple signal classification (MUSIC) algorithm with periodic signals, is applied to a single RF-port parasitic array antenna for direction finding. Simulation results show that the proposed method gives high-resolution (1 degree) DOA estimation in an uncorrelated signal environment. The novelty lies in applying the MUSIC algorithm to a simplified antenna configuration. Only one RF port and one analogue-to-digital converter (ADC) are used in this antenna, which features low DC power consumption, low cost, and ease of fabrication. The modifications to the conventional MUSIC algorithm do not add much complexity. The proposed technique is also free from the negative influence of mutual coupling between elements. Therefore, the technique has great potential to be implemented in existing wireless mobile communication systems, especially in power-consumption-limited mobile terminals, to provide additional position location (PL) services.
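
For context, the sketch below is conventional MUSIC on a simulated multi-element uniform linear array; the paper's single-RF-port parasitic-array variant with periodic signals is not reproduced here.

```python
import numpy as np

def music_doa(snapshots, n_sources, d_over_lambda=0.5,
              scan=np.arange(-90.0, 90.0, 0.5)):
    """Conventional MUSIC on a uniform linear array: eigendecompose the sample
    covariance, keep the noise subspace, scan the pseudospectrum for peaks."""
    M, N = snapshots.shape
    R = snapshots @ snapshots.conj().T / N
    _, V = np.linalg.eigh(R)                           # eigenvalues in ascending order
    En = V[:, :M - n_sources]                          # noise-subspace eigenvectors
    m = np.arange(M)[:, None]
    A = np.exp(-2j * np.pi * d_over_lambda * m * np.sin(np.deg2rad(scan)))
    P = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)   # pseudospectrum
    return scan, P

# Toy usage: two uncorrelated sources at -20 and 35 degrees, 8-element half-wavelength ULA.
rng = np.random.default_rng(1)
M, N = 8, 200
angles = np.deg2rad([-20.0, 35.0])
steer = np.exp(-2j * np.pi * 0.5 * np.arange(M)[:, None] * np.sin(angles))
s = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
x = steer @ s + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
scan, P = music_doa(x, n_sources=2)
peaks = np.flatnonzero((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:])) + 1
print(np.sort(scan[peaks[np.argsort(P[peaks])[-2:]]]))   # estimated DOAs (degrees)
```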

Health Assessment of Electronic Products using Mahalanobis Distance and Projection Pursuit Analysis

With increasing complexity in electronic systems, there is a need for system-level anomaly detection and fault isolation. Anomaly detection based on vector similarity to a training set is used in this paper through two approaches: one that preserves the original information, Mahalanobis Distance (MD), and one that compresses the data into its principal components, Projection Pursuit Analysis. These methods have been used to detect deviations in system performance from normal operation and for critical parameter isolation in multivariate environments. The study evaluates the detection capability of each approach on a set of test data with known faults against a baseline set of data representative of "healthy" systems.
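
A minimal sketch of the MD half of the approach is given below, assuming a multivariate-Gaussian-like healthy baseline and synthetic parameter data; the Projection Pursuit Analysis branch and the paper's parameter-isolation step are not reproduced.

```python
import numpy as np

def mahalanobis_scores(train, test):
    """Fit the mean and covariance on healthy training data and score each
    test vector by its Mahalanobis distance from that baseline."""
    mu = train.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(train, rowvar=False))
    diff = test - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))

# Toy usage: six monitored parameters; the simulated fault shifts the third one.
rng = np.random.default_rng(0)
healthy = rng.normal(size=(500, 6))
faulty = rng.normal(size=(20, 6)) + np.array([0, 0, 3, 0, 0, 0])
threshold = np.quantile(mahalanobis_scores(healthy, healthy), 0.99)
flagged = mahalanobis_scores(healthy, faulty) > threshold
print(f"{flagged.mean():.0%} of faulty records flagged as anomalous")
```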