Distributed Estimation Using an Improved Incremental Distributed LMS Algorithm

In this paper we consider the problem of distributed adaptive estimation in wireless sensor networks under two different observation noise conditions. In the first case, we assume that the network contains some sensors with high observation noise variance (noisy sensors). In the second case, the observation noise variance is assumed to differ from sensor to sensor, which is closer to a real scenario. In both cases, an initial estimate of each sensor's observation noise variance is obtained. For the first case, we show that when such sensors are present in the network, the performance of conventional distributed adaptive estimation algorithms, such as the incremental distributed least mean square (IDLMS) algorithm, degrades drastically, and that detecting and ignoring these sensors leads to better estimation performance. We therefore propose a simple algorithm to detect these noisy sensors and modify the IDLMS algorithm to deal with them. For the second case, we propose a new algorithm in which the step-size parameter of each sensor is adjusted according to its observation noise variance. As the simulation results show, the proposed methods outperform the IDLMS algorithm under the same conditions.
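
As a rough illustration of the second idea, the sketch below runs incremental (ring) LMS over a sensor network and scales each sensor's step size inversely with its estimated noise variance. The scaling rule mu_k = mu0 / (1 + sigma_k^2 / sigma_ref^2), the parameter values, and the variable names are illustrative assumptions, not the authors' exact update.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 20, 4                          # number of sensors, filter length
w_true = rng.standard_normal(M)       # unknown parameter to estimate
sigma2 = rng.uniform(0.01, 1.0, N)    # assumed per-sensor noise variances (illustrative)

mu0, sigma_ref2 = 0.05, 0.1
mu = mu0 / (1.0 + sigma2 / sigma_ref2)   # hypothetical variance-dependent step sizes

w = np.zeros(M)                       # shared estimate passed around the ring
for cycle in range(500):              # incremental LMS cycles
    for k in range(N):                # each sensor refines the estimate in turn
        u = rng.standard_normal(M)                                    # regressor at sensor k
        d = u @ w_true + np.sqrt(sigma2[k]) * rng.standard_normal()   # noisy observation
        e = d - u @ w
        w = w + mu[k] * e * u         # LMS update with sensor-specific step size

print("mean-square deviation:", np.mean((w - w_true) ** 2))
```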

The Spiral_OWL Model – Towards Spiral Knowledge Engineering

The spiral development model has been used successfully in many commercial systems and in a good number of defense systems, largely because it supports cost-effective incremental commitment of funds (often explained via an analogy to stud poker) and because it can be used to develop hardware or to integrate software, hardware, and systems. To support adaptive, semantic collaboration between domain experts and knowledge engineers, a new knowledge engineering process, called Spiral_OWL, is proposed. This model is based on the idea of iterative refinement, annotation, and structuring of the knowledge base. The Spiral_OWL model is derived from the spiral model and knowledge engineering methodology. A central paradigm of the Spiral_OWL model is its concentration on risk-driven determination of the knowledge engineering process. The collaboration aspect comes into play during the knowledge acquisition and knowledge validation phases. The design rationale for the Spiral_OWL model is an easy-to-implement, well-organized, and iterative development cycle that proceeds as an expanding spiral.

STRPRO Tool for Manipulation of Stratified Programs Based on SEPN

Negation is useful in the majority of real-world applications; however, its introduction leads to semantic and canonical problems. SEPN nets are a well-adapted extension of predicate nets for the definition and manipulation of stratified programs. This formalism is characterized by two main contributions: the first concerns the management of the whole class of stratified programs; the second is related to the optimization of usual operations (maximal stratification, incremental updates, ...). In this paper we propose useful algorithms for manipulating stratified programs using SEPN. These algorithms were implemented and validated with the STRPRO tool.
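
To make the notion of stratification concrete, the sketch below assigns stratum numbers to the predicates of a small Datalog-style program from its positive and negative dependencies, and reports failure when a cycle through negation makes the program non-stratifiable. It only illustrates the standard stratification idea; it is not the SEPN-based algorithm proposed in the paper.

```python
def stratify(rules):
    """Assign stratum numbers to predicates of a Datalog-style program.

    `rules` is a list of (head, pos_body, neg_body) tuples of predicate names.
    Returns a dict predicate -> stratum, or None if the program is not
    stratifiable (there is a cycle through negation).
    """
    preds = set()
    for head, pos, neg in rules:
        preds.add(head)
        preds.update(pos)
        preds.update(neg)

    stratum = {p: 0 for p in preds}
    n = len(preds)
    # Iteratively raise strata: head >= positive deps, head > negated deps.
    for _ in range(n * n + 1):
        changed = False
        for head, pos, neg in rules:
            need = max([stratum[p] for p in pos] + [stratum[p] + 1 for p in neg] + [0])
            if stratum[head] < need:
                stratum[head] = need
                changed = True
                if stratum[head] > n:      # stratum can exceed #predicates only on
                    return None            # a negative cycle -> not stratifiable
        if not changed:
            return stratum
    return None

# Example: p(X) :- q(X), not r(X).   r(X) :- s(X).
print(stratify([("p", ["q"], ["r"]), ("r", ["s"], [])]))   # {'p': 1, 'q': 0, 'r': 0, 's': 0}
```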

Assessment of the Adaptive Pushover Analysis Using Displacement-based Loading in Predicting the Seismic Behaviour of Unsymmetric-Plan Buildings

The recent drive toward the use of performance-based methodologies in the design and assessment of structures in seismic areas has significantly increased the demand for reliable nonlinear inelastic static pushover analysis tools. As a result, adaptive pushover methods have been developed during the last decade which, unlike their conventional pushover counterparts, can account for the effect that higher modes of vibration and progressive stiffness degradation may have on the distribution of seismic storey forces. Even in advanced pushover methods, however, little attention has been paid to unsymmetric structures. This study evaluates the seismic demands of three-dimensional unsymmetric-plan buildings determined by the Displacement-based Adaptive Pushover (DAP) analysis introduced by Antoniou and Pinho [2004]. The capability of the DAP procedure to capture the torsional effects due to the irregularities of the structures is investigated by comparing its estimates with the exact results obtained from Incremental Dynamic Analysis (IDA). The capability of the procedure to predict the seismic behaviour of the structure is also discussed.

Kaikaku - Radical Improvement in Production

Considering today's increasing speed of change, radical and innovative improvement (kaikaku) is a necessity alongside continuous incremental improvement (kaizen), especially for SMEs seeking the competitive edge needed to remain profitable. During 2011, a qualitative single case study with the objective of realizing a kaikaku in production was conducted. The case study was run as a one-year project using a collaborative approach including both researchers and company representatives, with the purpose of gaining further knowledge about kaikaku realization as well as its implications. The empirical results provide insights into the great productivity gains achieved by applying a specific kaikaku realization approach. However, the study also sheds light on the difficulty and contradiction of combining innovation management and production system development.

Identification of Nonlinear Predictor and Simulator Models of a Cement Rotary Kiln by Locally Linear Neuro-Fuzzy Technique

One of the most important parts of a cement factory is the cement rotary kiln, which plays a key role in the quality and quantity of the produced cement. In this part, intense physical processes and the bilateral movement of air and materials, together with chemical reactions, take place. Thus, this system has immensely complex and nonlinear dynamic equations that have not yet been worked out; only in exceptional cases has an approximate model been presented, after a large number of the parameters involved were eliminated. This issue causes many problems for designing a cement rotary kiln controller. In this paper, we present nonlinear predictor and simulator models for a real cement rotary kiln, obtained by applying a nonlinear identification technique based on the Locally Linear Neuro-Fuzzy (LLNF) model. For the first time, both a simulator model and a predictor model with a fifteen-minute prediction horizon are presented for a cement rotary kiln. These models are trained by the LOLIMOT algorithm, an incremental tree-structured algorithm. Finally, the characteristics, advantages, and drawbacks of these models are discussed. The data collected from the White Saveh Cement Company are used for modeling.
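
The abstract does not give the model equations, but a locally linear neuro-fuzzy model of the kind trained by LOLIMOT is commonly an interpolation of local linear models by normalized Gaussian validity functions. The sketch below shows that output computation under those standard assumptions; the centers, widths, and parameters are toy values and do not reproduce the authors' identified kiln model.

```python
import numpy as np

def llnf_predict(u, centers, widths, thetas):
    """Output of a locally linear neuro-fuzzy model for one input vector u.

    Each local model i contributes a linear prediction thetas[i, 0] + thetas[i, 1:] @ u,
    weighted by a normalized Gaussian validity function centred at centers[i].
    Shapes: centers, widths -> (M, p); thetas -> (M, p + 1). All values illustrative.
    """
    d2 = ((u - centers) / widths) ** 2            # squared scaled distances
    mu = np.exp(-0.5 * d2.sum(axis=1))            # Gaussian membership of each local model
    phi = mu / mu.sum()                           # normalized validity functions
    y_loc = thetas[:, 0] + thetas[:, 1:] @ u      # local linear model outputs
    return float(phi @ y_loc)                     # interpolated global output

# Toy usage with two local models on a 2-D input (illustrative numbers only)
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
widths = np.full((2, 2), 0.5)
thetas = np.array([[0.1, 1.0, -0.5],
                   [0.8, 0.2,  0.3]])
print(llnf_predict(np.array([0.4, 0.6]), centers, widths, thetas))
```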

Approximation Incremental Training Algorithm Based on a Changeable Training Set

Quick training algorithms and the accurate solution procedure for incremental learning aim at improving the training efficiency of support vector regression (SVR), but each has disadvantages: the former may fail to converge on a changeable training set, and the latter is inefficient for massive datasets. To handle these problems, a new training algorithm for a changeable training set, named the Approximation Incremental Training Algorithm (AITA), is proposed. This paper explores the reason for the non-convergence theoretically, discusses the realization of AITA, and finally demonstrates the benefits of AITA in both precision and efficiency.

Reliability-based Selection of Wind Turbines for Large-Scale Wind Farms

This paper presents a reliability-based approach to selecting appropriate wind turbine types for a wind farm, considering site-specific wind speed patterns. An actual wind farm in the northern region of Iran, with one year of registered wind speed data, is studied. An analytic approach based on the total probability theorem is utilized to model the probabilistic behavior of both turbine availability and wind speed. Well-known probabilistic reliability indices such as loss of load expectation (LOLE), expected energy not supplied (EENS), and incremental peak load carrying capability (IPLCC) for wind power integration in the Roy Billinton Test System (RBTS) are examined. The turbine type achieving the highest reliability level is chosen as the most appropriate for the studied wind farm.
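
As a minimal sketch of how LOLE and EENS can be evaluated with the total probability theorem, the code below enumerates two-state generation unit outcomes against an hourly load series. The unit capacities, availabilities, and load are synthetic illustrations, not the RBTS data or the authors' wind model.

```python
import itertools
import numpy as np

def reliability_indices(unit_caps, unit_avail, hourly_load):
    """LOLE (h/period) and EENS (MWh/period) by enumerating generation states.

    Each unit is a two-state model: available with probability unit_avail[i]
    (capacity unit_caps[i]) or fully out. Enumeration follows the total
    probability theorem; all numbers below are illustrative.
    """
    lole, eens = 0.0, 0.0
    for states in itertools.product([0, 1], repeat=len(unit_caps)):
        prob = np.prod([a if s else 1 - a for s, a in zip(states, unit_avail)])
        cap = sum(c for s, c in zip(states, unit_caps) if s)
        shortfall = np.maximum(hourly_load - cap, 0.0)
        lole += prob * np.count_nonzero(shortfall)   # hours with loss of load
        eens += prob * shortfall.sum()               # energy not supplied
    return lole, eens

# Toy system: three conventional units plus an equivalent wind block
caps  = [40, 40, 20, 15]           # MW
avail = [0.97, 0.97, 0.95, 0.30]   # wind block availability folds in wind-speed states
load  = np.random.default_rng(1).uniform(50, 100, 8760)   # synthetic hourly load, MW
print(reliability_indices(caps, avail, load))
```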

Inferring Hierarchical Pronunciation Rules from a Phonetic Dictionary

This work presents a new phonetic transcription system based on a tree of hierarchical pronunciation rules expressed as context-specific grapheme-phoneme correspondences. The tree is automatically inferred from a phonetic dictionary by incrementally analyzing deeper context levels, eventually representing a minimum set of exhaustive rules that pronounce without errors all the words in the training dictionary and that can be applied to out-of-vocabulary words. The proposed approach improves upon existing rule-tree-based techniques in that it uses graphemes, rather than letters, as elementary orthographic units. A new linear algorithm for segmenting a word into graphemes is introduced to enable out-of-vocabulary grapheme-based phonetic transcription. Exhaustive rule trees provide a canonical representation of the pronunciation rules of a language that can be used not only to pronounce out-of-vocabulary words, but also to analyze and compare the pronunciation rules inferred from different dictionaries. The proposed approach has been implemented in C and tested on Oxford British English and Basic English. Experimental results show that grapheme-based rule trees represent phonetically sound rules and provide better performance than letter-based rule trees.
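
For intuition about grapheme-based segmentation, the sketch below splits a word into graphemes by greedy longest match against a multi-letter grapheme inventory, with single letters as a fallback. The inventory is hypothetical and the greedy rule is only an illustration, not the specific linear algorithm introduced in the paper.

```python
def segment_graphemes(word, graphemes):
    """Greedy longest-match segmentation of a word into graphemes.

    `graphemes` is a multi-letter grapheme inventory (e.g. digraphs); single
    letters are always allowed as a fallback. Illustration only.
    """
    max_len = max(map(len, graphemes), default=1)
    out, i = [], 0
    while i < len(word):
        for l in range(min(max_len, len(word) - i), 0, -1):
            piece = word[i:i + l]
            if l == 1 or piece in graphemes:
                out.append(piece)
                i += l
                break
    return out

# Hypothetical English-like grapheme inventory
inventory = {"th", "sh", "ch", "ph", "ough", "ea", "oo"}
print(segment_graphemes("thought", inventory))   # ['th', 'ough', 't']
print(segment_graphemes("bookshop", inventory))  # ['b', 'oo', 'k', 'sh', 'o', 'p']
```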

Neuro-Fuzzy Network Based On Extended Kalman Filtering for Financial Time Series

A neural network's performance can be measured by its efficiency and accuracy. The major disadvantages of the neural network approach are that its generalization capability is often significantly low, and that tuning the weights to obtain an accurate model of a highly complex and nonlinear system may take a very long time. This paper presents a novel neuro-fuzzy architecture based on the Extended Kalman Filter. To test the performance and applicability of the proposed neuro-fuzzy model, a simulation study of a nonlinear complex dynamic system is carried out. The proposed method can be applied to on-line incremental adaptive learning for the prediction of financial time series. A benchmark case study is used to demonstrate that the proposed model is a superior neuro-fuzzy modeling technique.
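
For a flavour of how an Extended Kalman Filter can tune network weights, the sketch below treats the weights as the state vector, uses the model output as the measurement function, and applies the standard EKF predict/correct cycle with a numerically estimated Jacobian. The tiny tanh model, noise covariances, and function names are illustrative assumptions, not the paper's neuro-fuzzy architecture.

```python
import numpy as np

def ekf_train(model, w, xs, ys, q=1e-5, r=1e-2):
    """Extended Kalman filter estimation of model weights w (random-walk state)."""
    n = len(w)
    P = np.eye(n)
    for x, y in zip(xs, ys):
        y_hat = model(w, x)
        H = np.zeros((1, n))                   # d model / d w at current weights
        eps = 1e-6
        for i in range(n):
            dw = np.zeros(n); dw[i] = eps
            H[0, i] = (model(w + dw, x) - y_hat) / eps
        P = P + q * np.eye(n)                  # predict step for the weight state
        S = H @ P @ H.T + r                    # innovation covariance
        K = (P @ H.T) / S                      # Kalman gain, shape (n, 1)
        w = w + K[:, 0] * (y - y_hat)          # correct weights with the innovation
        P = (np.eye(n) - K @ H) @ P
    return w

# Toy usage: fit y = tanh(a*x + b) with weights [a, b]
model = lambda w, x: np.tanh(w[0] * x + w[1])
xs = np.linspace(-2, 2, 200)
ys = np.tanh(1.5 * xs - 0.3)
print(ekf_train(model, np.zeros(2), xs, ys))   # should approach [1.5, -0.3]
```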

Kernel Matching versus Inverse Probability Weighting: A Comparative Study

A recent quasi-experimental evaluation of the Canadian Active Labour Market Policies (ALMP) by Human Resources and Skills Development Canada (HRSDC) has provided an opportunity to examine alternative methods for estimating the incremental effects of Employment Benefits and Support Measures (EBSMs) on program participants. The focus of this paper is to assess the efficiency and robustness of inverse probability weighting (IPW) relative to kernel matching (KM) in the estimation of program effects. To accomplish this objective, the authors compare 1,080 pairs of estimates, along with their associated standard errors, to assess which type of estimate is generally more efficient and robust. In the interest of practicality, the authors also document the computational time it took to produce the IPW and KM estimates, respectively.
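
As a minimal sketch of the IPW side of the comparison, the code below estimates an average treatment effect on the treated by weighting comparison units with p/(1-p) from a logistic propensity model. It is illustrative only: it is not the HRSDC specification, and it omits the standard errors that the paper compares.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_att(X, treated, y):
    """Inverse-probability-weighted estimate of the ATT (illustrative)."""
    p = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    w = np.where(treated == 1, 1.0, p / (1.0 - p))       # reweight comparison group
    treated_mean = y[treated == 1].mean()
    control_mean = np.average(y[treated == 0], weights=w[treated == 0])
    return treated_mean - control_mean

# Synthetic example with a known incremental effect of 2.0
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 3))
pscore = 1 / (1 + np.exp(-X @ np.array([0.8, -0.5, 0.3])))
treated = rng.binomial(1, pscore)
y = X @ np.array([1.0, 1.0, 1.0]) + 2.0 * treated + rng.standard_normal(5000)
print(ipw_att(X, treated, y))   # should be close to 2.0
```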

Research on How Dividend Policy Influences Enterprise Value under the Condition of Consecutive Cash Payoff

This article investigates the relationship between cash dividend policy and enterprise value based on data from A-share listed companies over the period 2005-2009. In conclusion, enterprise value is negatively correlated with incremental and degressive cash dividends per share, and positively correlated with a stable cash dividend per share.

Identification, Prediction and Detection of the Process Fault in a Cement Rotary Kiln by Locally Linear Neuro-Fuzzy Technique

In this paper, we use a nonlinear system identification method to predict and detect process faults in a cement rotary kiln. After selecting proper inputs and outputs, an input-output model is identified for the plant. To capture the various operating points of the kiln, the Locally Linear Neuro-Fuzzy (LLNF) model is used. This model is trained by the LOLIMOT algorithm, an incremental tree-structured algorithm. Using this method, we obtained three distinct models for the normal and faulty situations in the kiln: one for the normal condition of the kiln, with a 15-minute prediction horizon, and two for the faulty situations, with 7-minute prediction horizons. Finally, we detect these faults in validation data. The data collected from the White Saveh Cement Company are used in this study.
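
One common way to turn such identified models into a fault detector is to compare their predictions against measurements and flag abnormally large residuals. The sketch below shows a simple residual-thresholding detector on synthetic data; the smoothing window, threshold rule, and injected drift fault are illustrative assumptions, not the detection scheme used in the paper.

```python
import numpy as np

def detect_faults(y_measured, y_predicted, window=30, k=3.0):
    """Flag samples whose prediction residual is abnormally large.

    A fault is declared when the moving-average absolute residual exceeds a
    threshold derived from fault-free data at the start of the record.
    """
    r = np.abs(np.asarray(y_measured) - np.asarray(y_predicted))
    baseline = np.median(r) + k * np.std(r[: len(r) // 4])    # fault-free start assumed
    smoothed = np.convolve(r, np.ones(window) / window, mode="same")
    return smoothed > baseline

# Toy usage: a drift fault injected after sample 600
t = np.arange(1000)
y_true = np.sin(0.02 * t)
y_meas = y_true + 0.05 * np.random.default_rng(2).standard_normal(1000)
y_meas[600:] += 0.01 * (t[600:] - 600)          # simulated fault
flags = detect_faults(y_meas, y_true)
print("first flagged sample:", int(np.argmax(flags)))
```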

Artificial Neural Network Prediction for Coke Strength after Reaction and Data Analysis

In this paper, the requirement for coke quality prediction, its role in blast furnaces, and the model output are explained. A prediction model for coke strength after reaction (CSR) has been developed using an Artificial Neural Network (ANN) trained with the back-propagation (BP) algorithm. Important blast furnace functions such as permeability, heat exchange, melting, and reducing capacity are closely connected to coke quality, which in turn depends on coal characteristics and coke-making process parameters. The developed ANN model is a useful tool for process experts to adjust control parameters in case of coke quality deviations, and it also makes it possible to predict CSR for new coal blends that are yet to be used in the coke plant. The input data to the model were structured into three modules covering the past two years, and the incremental models thus developed assist in identifying the group causing a deviation in CSR.
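
As a compact illustration of a back-propagation regression network of the kind described, the sketch below trains a one-hidden-layer MLP with plain gradient descent on synthetic data. The architecture, hyperparameters, input names, and data are all hypothetical; the real model's inputs and tuning come from the plant data described in the paper.

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.01, epochs=2000, seed=0):
    """Tiny one-hidden-layer MLP trained with plain back-propagation (MSE loss)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.standard_normal((d, hidden)) * 0.5
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, 1)) * 0.5
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                    # forward pass
        y_hat = (h @ W2 + b2).ravel()
        err = y_hat - y                             # gradient of MSE w.r.t. y_hat (up to 2/n)
        gW2 = h.T @ err[:, None] / n
        gb2 = err.mean(keepdims=True)
        dh = (err[:, None] @ W2.T) * (1 - h ** 2)   # back-propagate through tanh
        gW1 = X.T @ dh / n
        gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: (np.tanh(Xn @ W1 + b1) @ W2 + b2).ravel()

# Toy usage on synthetic "coal blend -> CSR-like index" data (hypothetical features)
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (300, 5))
y = 60 + 10 * X[:, 0] - 8 * X[:, 1] + rng.normal(0, 0.5, 300)
predict = train_mlp(X, (y - y.mean()) / y.std())
print(predict(X[:3]) * y.std() + y.mean(), y[:3])
```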

Optimal Solution of Constraint Satisfaction Problems

An optimal solution for a large number of constraint satisfaction problems can be found using the technique of substitution and elimination of variables, analogous to the technique used to solve systems of equations. A decision function f(A) = max(A²) is used to determine which variables to eliminate. The algorithm can be expressed in six lines and is remarkable in both its simplicity and its ability to find an optimal solution. However, it is inefficient in that it needs to square the updated A matrix after each variable elimination. To overcome this inefficiency, the algorithm is analyzed and it is shown that the A matrix only needs to be squared once, at the first step of the algorithm, and then incrementally updated in subsequent steps, resulting in a significant improvement and an algorithm complexity of O(n³).

Empirical Evidence on Equity Valuation of Thai Firms

This study provides empirical evidence on a comparison of two equity valuation models, (1) the dividend discount model (DDM) and (2) the residual income model (RIM), in estimating the equity values of Thai firms during 1995-2004. Results suggest that both DDM and RIM underestimate the equity values of Thai firms and that RIM outperforms DDM in predicting cross-sectional stock prices. Regressions of cross-sectional stock prices on the decomposed DDM and RIM equity values indicate that book value of equity provides the greatest incremental explanatory power relative to the other components in the DDM and RIM terminal values, suggesting that book value distortions resulting from accounting procedures and choices are less severe than forecast and measurement errors in discount rates and growth rates. We also document that the incremental explanatory power of book value of equity during 1998-2004, representing the information environment under the Thai Accounting Standards reformed after the 1997 economic crisis to conform to International Accounting Standards, is significantly greater than that during 1995-1996, representing the information environment under the pre-reformed Thai Accounting Standards. This implies that book value distortions are less severe under the 1997 reformed Thai Accounting Standards than under the pre-reformed standards.
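
For concreteness, the sketch below computes stylized DDM and RIM values under textbook assumptions (a finite forecast horizon with a Gordon growth terminal value, clean-surplus book values, and a hypothetical constant payout ratio). It is illustrative only and is not the paper's empirical specification or data.

```python
def ddm_value(dividends, r, g):
    """Dividend discount model: discounted forecast dividends plus a Gordon
    growth terminal value based on the first post-horizon dividend."""
    T = len(dividends)
    pv = sum(d / (1 + r) ** (t + 1) for t, d in enumerate(dividends))
    terminal = dividends[-1] * (1 + g) / (r - g)
    return pv + terminal / (1 + r) ** T

def rim_value(book0, earnings, r, g, payout=0.4):
    """Residual income model: current book value plus discounted residual
    income (earnings minus a capital charge r * beginning book value), with
    clean-surplus book values; the constant payout ratio is a simplification."""
    pv, book = 0.0, book0
    for t, e in enumerate(earnings, start=1):
        pv += (e - r * book) / (1 + r) ** t          # residual income for year t
        book += e * (1 - payout)                     # clean-surplus book value update
    ri_next = earnings[-1] * (1 + g) - r * book      # first post-horizon residual income
    return book0 + pv + (ri_next / (r - g)) / (1 + r) ** len(earnings)

# Illustrative five-year forecasts (not Thai firm data)
print(ddm_value([2.0, 2.2, 2.4, 2.6, 2.8], r=0.10, g=0.03))
print(rim_value(book0=20.0, earnings=[3.0, 3.2, 3.4, 3.6, 3.8], r=0.10, g=0.03))
```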

Recognition Machine (RM) for On-line and Isolated Flight Deck Officer (FDO) Gestures

The paper presents an on-line recognition machine (RM) for continuous/isolated, dynamic and static gestures that arise in Flight Deck Officer (FDO) training. RM is based on a generic pattern recognition framework. Gestures are represented as templates using summary statistics. The proposed recognition algorithm exploits the temporal and spatial characteristics of gestures via dynamic programming and a Markovian process. The algorithm predicts the corresponding index of incremental input data in the templates in an on-line mode. Accumulated consistency in the sequence of predictions provides a similarity measure (score) between the input data and the templates. The algorithm provides an intuitive mechanism for automatic detection of the start/end frames of continuous gestures. In the present paper, we consider isolated gestures. The performance of RM is evaluated using four datasets: artificial (W TTest), hand motion (Yang), and FDO (tracker and vision-based). RM achieves results comparable to, and in agreement with, other on-line and off-line algorithms such as hidden Markov models (HMM) and dynamic time warping (DTW). The proposed algorithm has the additional advantage of providing timely feedback for training purposes.
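
The sketch below shows the classic dynamic time warping distance used as one of the baselines mentioned above: two gesture sequences of different lengths are aligned by dynamic programming over frame-to-frame distances. It illustrates the comparison method only, not the proposed Recognition Machine itself.

```python
import numpy as np

def dtw_distance(seq, template):
    """Dynamic time warping distance between two feature sequences (one row per frame)."""
    n, m = len(seq), len(template)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq[i - 1] - template[j - 1])   # frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy usage: the same gesture traced at two different speeds
t_fast = np.linspace(0, np.pi, 30)
t_slow = np.linspace(0, np.pi, 50)
gesture = np.c_[np.cos(t_fast), np.sin(t_fast)]
template = np.c_[np.cos(t_slow), np.sin(t_slow)]
print(dtw_distance(gesture, template))   # small despite the different lengths
```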

Connectionist Approach to Generic Text Summarization

As the enormous amount of on-line text on the World-Wide Web grows, the development of methods for automatically summarizing this text becomes more important. The primary goal of this research is to create an efficient tool that is able to summarize large documents automatically. We propose an Evolving Connectionist System: an adaptive, incremental learning and knowledge representation system that evolves its structure and functionality. In this paper, we propose a novel approach to part-of-speech disambiguation using a recurrent neural network, a paradigm capable of dealing with sequential data. We observed that a connectionist approach to text summarization offers a natural way of learning grammatical structures through experience. Experimental results show that our approach achieves acceptable performance.

A Cumulative Learning Approach to Data Mining Employing Censored Production Rules (CPRs)

Knowledge is indispensable, but voluminous knowledge becomes a bottleneck for efficient processing. A great challenge for data mining is the generation of a large number of potential rules as a result of the mining process; in fact, the result size is sometimes comparable to that of the original data. Traditional pruning measures such as support do not sufficiently reduce the huge rule space. Moreover, many practical applications are characterized by continual change of data and knowledge, making the knowledge more voluminous with each change. The most predominant representation of discovered knowledge is the standard Production Rule (PR) of the form If P Then D. Michalski and Winston proposed Censored Production Rules (CPRs) as an extension of production rules that exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form If P Then D Unless C, where C (the censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. By using a rule of this type we are free to ignore the exception condition when the resources needed to establish its presence are tight, or when there is simply no information available as to whether it holds. Thus the 'If P Then D' part of the CPR expresses important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D. In this paper, a scheme based on a Dempster-Shafer Theory (DST) interpretation of a CPR is suggested for discovering CPRs from the discovered flat PRs. The discovery of CPRs from flat rules would result in a considerable reduction of the already discovered rules. The proposed scheme incrementally incorporates new knowledge and also reduces the size of the knowledge base considerably with each episode. Examples are given to demonstrate the behaviour of the proposed scheme. The suggested cumulative learning scheme would be useful in mining data streams.
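
To make the 'If P Then D Unless C' semantics concrete, the sketch below encodes a CPR whose censor check may return unknown, in which case the rule fires like a plain production rule. This is one illustrative reading of CPR behaviour, not the paper's DST-based discovery scheme.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class CensoredProductionRule:
    """If premise Then decision Unless censor.

    `censor_check` returns True/False when the exception can be established,
    or None when resources are tight or no information is available, in which
    case the rule behaves like the plain PR 'If P Then D'.
    """
    premise: Callable[[dict], bool]
    decision: str
    censor_check: Callable[[dict], Optional[bool]]

    def fire(self, facts: dict) -> Optional[str]:
        if not self.premise(facts):
            return None                     # rule does not apply
        c = self.censor_check(facts)
        if c is True:
            return "not " + self.decision   # censor holds: polarity of D flips to ~D
        return self.decision                # censor false or unknown: conclude D

# Example: If bird(x) Then flies(x) Unless penguin(x)
rule = CensoredProductionRule(
    premise=lambda f: f.get("bird", False),
    decision="flies",
    censor_check=lambda f: f.get("penguin"),   # None when unrecorded
)
print(rule.fire({"bird": True}))                     # flies (censor unknown)
print(rule.fire({"bird": True, "penguin": True}))    # not flies
```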

High Level Synthesis of Kahn Process Networks (KPN) for Streaming Applications

Streaming applications usually consist of stages, running in parallel or in series, that incrementally transform a stream of input data. Breaking such an application into distinguishable blocks and mapping them onto independent hardware processing elements poses a design challenge, and it requires a generic controller that automatically maps the stream of data onto independent processing elements without any dependencies or manual intervention. In this paper, Kahn Process Networks (KPN) for such streaming applications are designed and developed to be mapped onto an MPSoC. The design includes a generic C-based compiler that takes the mapping specifications as input from the user, automates these design constraints, and automatically generates optimized, synthesizable RTL code for the specified application.
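
As a small software model of KPN semantics (deterministic processes communicating only over FIFO channels with blocking reads), the sketch below wires a producer, a transform, and a consumer into a linear network using threads and queues. It only illustrates the execution model; it is not the paper's C-based compiler or RTL generator.

```python
import threading, queue

def producer(out_fifo, n=10):
    for i in range(n):
        out_fifo.put(i)           # unbounded write, as in KPN semantics
    out_fifo.put(None)            # end-of-stream token

def transform(in_fifo, out_fifo):
    while (tok := in_fifo.get()) is not None:   # blocking read on the input FIFO
        out_fifo.put(tok * tok)                 # incremental transformation of the stream
    out_fifo.put(None)

def consumer(in_fifo, result):
    while (tok := in_fifo.get()) is not None:
        result.append(tok)

# Wire three processes into a linear KPN: producer -> transform -> consumer.
a, b = queue.Queue(), queue.Queue()
result = []
threads = [threading.Thread(target=producer, args=(a,)),
           threading.Thread(target=transform, args=(a, b)),
           threading.Thread(target=consumer, args=(b, result))]
for t in threads: t.start()
for t in threads: t.join()
print(result)    # squares of 0..9, independent of thread scheduling
```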