A New Configurable Decimation Filter Using Pascal's Triangle Theorem

This paper presents a new configurable decimation filter for sigma-delta modulators. The filter employs Pascal's triangle theorem for building the coefficients of non-recursive decimation filters, and it can be connected to the back-end of various modulators with different output accuracies. In this work, two methods are presented and compared from an area-occupation viewpoint. The first method uses memory, while the second employs the Pascal's triangle method with the aim of reducing the number of required gates. Xilinx ISE v10 is used for implementation and verification of the filter.
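
To make the coefficient-building step concrete, the sketch below generates the impulse response of a non-recursive (sinc-type) decimation filter of order K and decimation factor M by repeated convolution, which reproduces Pascal's triangle row by row. This is a minimal Python illustration with function names of our own, not the paper's hardware implementation.

```python
import numpy as np

def decimation_coefficients(M, K):
    """Coefficients of a non-recursive sinc^K decimation filter.

    Convolving the length-M boxcar with itself K times generalizes
    Pascal's triangle: for M = 2 the result is exactly the binomial
    coefficients C(K, i).
    """
    box = np.ones(M, dtype=np.int64)   # length-M boxcar (all-ones) kernel
    h = np.array([1], dtype=np.int64)
    for _ in range(K):
        h = np.convolve(h, box)        # one more cascaded averaging stage
    return h

# Example: M = 2, K = 4 gives the Pascal row 1 4 6 4 1.
print(decimation_coefficients(2, 4))
```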

Sensitivity of the SHARC Model to Variations of the Manning Coefficient and the Effect of "n" on Sediment Material Entry into the Eastern Water Intake: A Case Study of the Dez Diversion Weir in Iran

Permanent rivers are the main sources of renewable water supply for croplands under irrigation and drainage schemes. They are also the major source of sediment load transport into the storage reservoirs of hydro-electric dams, diversion weirs and regulating dams. The sedimentation process results from soil erosion, which is related to poor watershed management and human intervention in the hydraulic regime of the rivers. These factors can change the hydraulic behavior and, as such, lead to riverbed and river bank scouring, the consequences of which are sediment load transport into the dams and therefore reduced flow discharge in the water intakes. The present paper investigates the sedimentation process by varying the Manning coefficient "n" using the SHARC software along the watercourse of the Dez River. Results indicated that the optimum "n" within that river range is 0.0315, at which the minimum quantity of sediment load is transported into the Eastern intake. Comparison of the model results with those obtained from the SSIIM software within the same river reach showed very close agreement between them. This suggests the relative accuracy with which the model can simulate the hydraulic flow characteristics, and therefore its suitability as a powerful analytical tool for project feasibility studies and project implementation.
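
For reference, the sensitivity study varies the roughness coefficient in Manning's equation, V = (1/n) R^(2/3) S^(1/2). The Python sketch below shows how flow velocity responds to a sweep of n around the reported optimum of 0.0315; the hydraulic radius and slope values are illustrative assumptions, not SHARC inputs.

```python
import numpy as np

def manning_velocity(n, R, S):
    """Manning's equation (SI units): V = (1/n) * R**(2/3) * S**(1/2)."""
    return (1.0 / n) * R ** (2.0 / 3.0) * np.sqrt(S)

# Hypothetical hydraulic radius (m) and channel slope; not SHARC data.
R, S = 2.5, 0.0004
for n in (0.025, 0.0315, 0.040):
    print(f"n = {n:.4f}  ->  V = {manning_velocity(n, R, S):.3f} m/s")
```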

Design of a Production Line Based on RFID through 3D Modeling

Radio-frequency identification (RFID) has emerged as a beneficial technology, conforming to GS1 standards, for providing the best solutions in the manufacturing area. It competes with other automated identification technologies, such as barcodes and smart cards, in terms of high-speed scanning, reliability and accuracy. The purpose of this study is to improve a production line's performance by implementing an RFID system in the manufacturing area, using 3D modeling in Cinema 4D R13, which provides clear graphical scenes for users to portray their applications. Finally, with regard to improving system performance, the study shows how RFID, in comparison with the barcode scanner, is a well-suited technology for handling different kinds of raw materials in a production line based on a logical process.

Voice Command Recognition System Based on MFCC and VQ Algorithms

The goal of this project is to design a system to recognize voice commands. Most voice recognition systems contain two main modules: "feature extraction" and "feature matching". In this project, the MFCC algorithm is used to implement the feature extraction module; using this algorithm, the cepstral coefficients are calculated on the mel frequency scale. The vector quantization (VQ) method is used to reduce the amount of data and decrease the computation time. In the feature matching stage, the Euclidean distance is applied as the similarity criterion. Because of the high accuracy of the algorithms used, the accuracy of this voice command system is high. Using these algorithms, with at least five repetitions of each command in a single training session, followed by two repetitions in each testing session, a zero error rate in command recognition is achieved.
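
A minimal sketch of the feature-matching stage, assuming MFCC frames have already been extracted as an (n_frames, n_coefficients) matrix: each command's training frames are quantized into a VQ codebook with k-means, and a test utterance is assigned to the command whose codebook gives the lowest average Euclidean distortion. The function names and codebook size are our assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def train_codebook(mfcc_frames, codebook_size=16):
    """Build a VQ codebook from an (n_frames, n_coeffs) MFCC matrix."""
    codebook, _ = kmeans2(mfcc_frames, codebook_size, minit="++", seed=0)
    return codebook

def match_command(test_frames, codebooks):
    """Return the command whose codebook gives the lowest distortion."""
    best, best_dist = None, np.inf
    for name, cb in codebooks.items():
        _, dists = vq(test_frames, cb)  # Euclidean distance to nearest code vector
        avg = dists.mean()
        if avg < best_dist:
            best, best_dist = name, avg
    return best
```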

A Novel Estimation Method for Integer Frequency Offset in Wireless OFDM Systems

Ren et al. presented an efficient carrier frequency offset (CFO) estimation method for orthogonal frequency division multiplexing (OFDM), which has an estimation range as large as the bandwidth of the OFDM signal and achieves high accuracy without any constraint on the structure of the training sequence. However, its detection probability for the integer frequency offset (IFO) varies rapidly with changes in the fractional frequency offset (FFO). In this paper, we first analyze Ren's method and define two criteria suitable for detection of the IFO. Then, we propose a novel method for IFO estimation based on the maximum-likelihood (ML) principle and the detection criteria defined in this paper. The simulation results demonstrate that the proposed method outperforms Ren's method in terms of the IFO detection probability, irrespective of the value of the FFO.
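
The abstract does not spell out the two detection criteria, so the sketch below illustrates only the generic ML-style IFO search: each candidate integer offset appears as a cyclic shift of the training spectrum, and the candidate maximizing a correlation metric is selected. This is a hedged illustration of the principle, not a reconstruction of Ren's method or of the proposed criteria.

```python
import numpy as np

def detect_ifo(received_freq, training_freq, max_ifo):
    """Pick the integer CFO that maximizes the correlation metric.

    received_freq : FFT of the received training symbol (after FFO removal)
    training_freq : FFT of the known training sequence
    """
    best_ifo, best_metric = 0, -np.inf
    for d in range(-max_ifo, max_ifo + 1):
        # An IFO of d subcarriers appears as a cyclic shift of the spectrum.
        shifted = np.roll(training_freq, d)
        metric = np.abs(np.vdot(shifted, received_freq))
        if metric > best_metric:
            best_ifo, best_metric = d, metric
    return best_ifo
```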

Validation and Selection between Machine Learning Techniques and Traditional Methods to Reduce the Bullwhip Effect: A Data Mining Approach

The aim of this paper is to present a three-step methodology to forecast supply chain demand. In the first step, various data mining techniques are applied in order to prepare the data for entry into the forecasting models. In the second step, the modeling step, an artificial neural network and a support vector machine are presented, after defining the Mean Absolute Percentage Error (MAPE) index for measuring error. The structure of the artificial neural network is selected based on previous researchers' results, and in this article the accuracy of the network is increased by using sensitivity analysis. The best forecast among the classical forecasting methods (moving average, exponential smoothing, and exponential smoothing with trend) is obtained from the prepared data, and this forecast is compared with the results of the support vector machine and the proposed artificial neural network. The results show that the artificial neural network can forecast more precisely than the other methods. Finally, the stability of the forecasting methods is analyzed using the raw data, and the effectiveness of the clustering analysis is measured.
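
The error index used for model selection can be stated compactly. The sketch below computes MAPE together with moving-average and exponential-smoothing baselines on illustrative demand data; the data, window size and smoothing constant are our assumptions, not the paper's.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def moving_average(series, window=3):
    """Forecast each point as the mean of the previous `window` points."""
    s = np.asarray(series, float)
    return np.array([s[i - window:i].mean() for i in range(window, len(s))])

def exp_smoothing(series, alpha=0.3):
    """Simple exponential smoothing; returns one-step-ahead forecasts."""
    s = np.asarray(series, float)
    f = [s[0]]
    for x in s[:-1]:
        f.append(alpha * x + (1 - alpha) * f[-1])
    return np.array(f)

demand = [120, 135, 128, 150, 162, 158, 171]   # illustrative demand series
print(mape(demand[3:], moving_average(demand, 3)))
print(mape(demand[1:], exp_smoothing(demand)[1:]))
```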

Text Summarization for Oil and Gas News Articles

Information is increasing in volume; companies are so overloaded with information that they may lose track of the information they need. Scanning through each lengthy document is a time-consuming task. A shorter version of a document, containing only its gist, is more favourable for most information seekers. Therefore, in this paper, we implement a text summarization system to produce summaries that contain the gist of oil and gas news articles. The summarization is intended to provide important information that helps oil and gas companies monitor their competitors' behaviour and formulate business strategies. The system integrates a statistical approach with three underlying features: keyword occurrences, the title of the news article, and the location of the sentence. The generated summaries were compared with human-generated summaries from an oil and gas company. Precision and recall ratios are used to evaluate the accuracy of the generated summaries. Based on the experimental results, the system is able to produce an effective summary, with an average recall value of 83% at a compression rate of 25%.
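
A minimal sketch of the statistical scoring described above: each sentence is scored by keyword occurrences, overlap with the title, and its position in the article, and the top 25% of sentences is kept as the summary. The equal weighting and the tokenization are our assumptions, not the paper's.

```python
import re
from collections import Counter

def summarize(title, sentences, compression=0.25):
    """Score sentences by keyword frequency, title overlap and position."""
    words = lambda s: re.findall(r"[a-z']+", s.lower())
    freq = Counter(w for s in sentences for w in words(s))
    title_words = set(words(title))
    n = len(sentences)

    def score(i, s):
        toks = words(s)
        if not toks:
            return 0.0
        keyword = sum(freq[w] for w in toks) / len(toks)   # keyword occurrences
        overlap = len(title_words & set(toks)) / max(len(title_words), 1)
        position = (n - i) / n                             # earlier sentences rank higher
        return keyword + overlap + position                # equal weights (assumed)

    k = max(1, int(compression * n))
    top = sorted(range(n), key=lambda i: score(i, sentences[i]), reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]             # keep original order
```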

Simplified Models to Determine Nodal Voltages in Problems of Optimal Allocation of Capacitor Banks in Power Distribution Networks

This paper presents two simplified models to determine nodal voltages in power distribution networks. These models allow estimating the impact of installing reactive power compensation equipment such as fixed or switched capacitor banks. The procedure used to develop the models is similar to that used to develop linear power flow models of transmission lines, which have been widely used in optimization problems of operation planning and system expansion. The steady-state non-linear load flow equations are approximated by linear equations relating the voltage amplitudes and currents. The approximations in the linear equations are based on the high ratio of line resistance to line reactance (R/X), which is typical of power distribution networks. The performance and accuracy of the models are evaluated through comparisons with the exact results obtained from the solution of the load flow using two test networks: a hypothetical network with 23 nodes and a real network with 217 nodes.
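
To make the linearization concrete: in a feeder with a high R/X ratio, the per-branch voltage drop is commonly approximated as dV ~ (R*P + X*Q)/Vn, which is linear in the injected powers. The sketch below applies this approximation along a radial feeder; the network data are illustrative assumptions, not the 23- or 217-node test cases.

```python
def nodal_voltages(v_source, branches):
    """Linearized voltages along a radial feeder.

    branches: list of (R_ohm, X_ohm, P_kw, Q_kvar) for each series branch,
    where P, Q are the total downstream power flowing through the branch.
    Uses dV ~ (R*P + X*Q) / Vn, valid for high-R/X distribution lines.
    """
    v, volts = v_source, []
    for R, X, P_kw, Q_kvar in branches:
        v -= (R * P_kw * 1e3 + X * Q_kvar * 1e3) / v_source  # linear drop (volts)
        volts.append(v)
    return volts

# Hypothetical 3-branch feeder at 13.8 kV (values are not from the paper).
print(nodal_voltages(13800.0, [(0.8, 0.4, 900, 400),
                               (0.6, 0.3, 600, 250),
                               (0.5, 0.25, 300, 120)]))
```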

Prediction of a Human Facial Image by ANN Using Image Data and Its Content on Web Pages

Choosing the right metadata is critical, as good information (metadata) attached to an image will make it easier to find among a pile of other images. An image's value is enhanced not only by the quality of the attached metadata but also by the search technique. This study proposes a simple but efficient technique to predict a single human image from a website using the basic image data and the embedded metadata of the image's content appearing on web pages. The result is very encouraging, with a prediction accuracy of 95%. This technique may greatly assist librarians, researchers and many others in automatically and efficiently identifying a set of human images out of a larger set of images.

Implementation of an On-Line PD Measurement System Using HFCT

In order to perform on-line measurement and detection of partial discharge (PD) signals, a complete solution comprising a high-frequency current transformer (HFCT), an A/D converter and a full software package is proposed. The software package includes compensation for the HFCT's contribution, filtering and noise reduction using the wavelet transform, and soft calibration routines. The results have shown good performance and high accuracy.
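
The wavelet denoising step can be sketched with the PyWavelets library: decompose the acquired PD signal, soft-threshold the detail coefficients, and reconstruct. The wavelet choice, decomposition level and universal threshold below are our assumptions, not the paper's settings.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising of a sampled PD signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise estimate from the finest detail band (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))   # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]
```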

A Comparison of Wolf Pack Search and Four Other Optimization Algorithms

The main objective of this paper is to compare the Wolf Pack Search (WPS), a newly introduced intelligent algorithm, with four other well-known algorithms: Particle Swarm Optimization (PSO), Shuffled Frog Leaping (SFL), and the binary and continuous genetic algorithms. All algorithms are applied to two benchmark cost functions. The aim is to identify the best algorithm in terms of speed and accuracy in finding the solution, where speed is measured by the number of function evaluations. The simulation results show that the SFL algorithm, with the fewest function evaluations, ranks first when simulation time is important, while WPS and PSO perform better when accuracy is the significant issue.
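
Because speed is measured by function evaluations, a fair harness must count calls to the cost function rather than wall-clock time. Below is a minimal sketch of such a harness with a bare-bones PSO on the sphere benchmark; the parameters are illustrative, and the paper's WPS and SFL implementations are not reproduced here.

```python
import numpy as np

class CountedFunction:
    """Wraps a cost function and counts how often it is evaluated."""
    def __init__(self, f):
        self.f, self.evals = f, 0
    def __call__(self, x):
        self.evals += 1
        return self.f(x)

def pso(cost, dim=10, particles=20, iters=100, seed=0):
    """Bare-bones global-best PSO on [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

sphere = CountedFunction(lambda x: float(np.sum(x ** 2)))
best, best_f = pso(sphere)
print(f"best cost {best_f:.3e} after {sphere.evals} evaluations")
```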

Micro-Controller Based Oxy-Fuel Profile Cutting System

In today's era of plasma and laser cutting, machines using an oxy-acetylene flame remain meritorious due to their simplicity and cost effectiveness. The objective of devising a computer-controlled oxy-fuel profile cutting machine arose from the increasing demand for metal cutting with better edge quality, circularity and less formation of redeposited material. The system has an 8-bit microcontroller-based embedded system, which assures the stipulated time response. New Windows-based application software was devised that takes a standard CAD file (.DXF) as input and converts it into the numerical data required by the controller. It uses VB6 as the front end, with MS Access and AutoCAD as the back end. The system is designed around the AT89C51RD2, a powerful 8-bit ISP microcontroller from Atmel, and is optimized to achieve cost effectiveness while maintaining the required accuracy and reliability for complex shapes. The backbone of the system is a cleverly designed mechanical assembly, which, together with the embedded system, results in an accuracy of about 10 microns while maintaining perfect linearity in the cut. This results in a substantial increase in productivity. The observed results also indicate reduced interlamellar spacing of pearlite and an increase in the hardness of the edge region.
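
The CAD-to-controller conversion described above was done in VB6; as a language-neutral illustration of the same step, the Python sketch below uses the ezdxf library to read a .DXF file and extract LINE-entity endpoints as numerical data for a motion controller. This is a sketch of the idea, not the paper's software.

```python
import ezdxf

def dxf_to_moves(path):
    """Extract (x1, y1, x2, y2) segments from LINE entities in a DXF file."""
    doc = ezdxf.readfile(path)
    msp = doc.modelspace()
    moves = []
    for line in msp.query("LINE"):
        s, e = line.dxf.start, line.dxf.end
        moves.append((s.x, s.y, e.x, e.y))   # numerical data for the controller
    return moves
```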

Operational Assessment of an Industrial Single-Source, Single-Detector Gamma CT Using MCNP4C Code Simulation and Experimental Test Comparisons

A 3D industrial computed tomography (CT) system, manufactured on the basis of first-generation CT designs with a single source and a single detector, was evaluated. The operational accuracy of the manufactured system was assessed by comparing simulations with experimental tests. 137Cs and 60Co were used as gamma sources, and the simulations were performed using the MCNP4C code. The experimental tests with 137Cs were in good agreement with the simulations.

Neural Network Learning Improvement Using the K-Means Clustering Algorithm to Detect Network Intrusions

In the present work, we propose a new technique to enhance the learning capabilities and reduce the computational intensity of a competitive learning multi-layered neural network using the K-means clustering algorithm. The proposed model uses a multi-layered network architecture with a backpropagation learning mechanism. The K-means algorithm is first applied to the training dataset to reduce the number of samples to be presented to the neural network, by automatically selecting an optimal set of samples. The obtained results demonstrate that the proposed technique performs exceptionally well in terms of both accuracy and computation time when applied to the KDD99 dataset, compared to a standard learning schema that uses the full dataset.
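
A minimal sketch of the sample-reduction idea using scikit-learn: cluster the training set with k-means, keep the sample nearest each centroid (with its label) as the reduced training set, and train a backpropagation network on it. The cluster count and network size are our assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def reduce_with_kmeans(X, y, n_clusters=100, seed=0):
    """Keep the training sample closest to each k-means centroid."""
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(X)
    idx = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        if members.size:
            d = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
            idx.append(members[d.argmin()])
    return X[idx], y[idx]

# X_train, y_train would be the (preprocessed) KDD99 feature and label arrays:
# X_small, y_small = reduce_with_kmeans(X_train, y_train)
# clf = MLPClassifier(hidden_layer_sizes=(40,), max_iter=300).fit(X_small, y_small)
```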

Rough Set Based Intelligent Welding Quality Classification

The knowledge base of welding defect recognition is essentially incomplete. This characteristic means that the recognition results do not reflect the actual situation, and it further influences the classification of welding quality. This paper is concerned with a rough set based method to reduce this influence and improve the classification accuracy. First, a rough set model of intelligent welding quality classification is built, and both condition and decision attributes are specified. Then, groups of representative multiple compound defects are chosen from the defect library and classified correctly to form the decision table. Finally, the redundant information in the decision table is removed by attribute reduction, and the optimal decision rules are obtained. With this method, we are able to reclassify the misclassified defects to the right quality level. Compared with ordinary methods, this method has higher accuracy and better robustness.
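
To illustrate the rough-set machinery on a decision table: samples that are indiscernible on the condition attributes are grouped, and the lower approximation of a quality class contains exactly the groups that fall entirely inside that class. A minimal sketch with a made-up weld-defect table follows (the attributes and values are hypothetical, not from the paper's defect library).

```python
from collections import defaultdict

def lower_approximation(table, condition_attrs, decision_attr, target):
    """Rows certainly belonging to `target` given only the condition attributes."""
    groups = defaultdict(list)                      # indiscernibility classes
    for row in table:
        key = tuple(row[a] for a in condition_attrs)
        groups[key].append(row)
    lower = []
    for rows in groups.values():
        if all(r[decision_attr] == target for r in rows):
            lower.extend(rows)                      # class lies wholly in target
    return lower

# Hypothetical weld-defect decision table.
table = [
    {"porosity": "high", "crack": "yes", "quality": "reject"},
    {"porosity": "high", "crack": "yes", "quality": "reject"},
    {"porosity": "low",  "crack": "no",  "quality": "accept"},
    {"porosity": "low",  "crack": "yes", "quality": "reject"},
]
print(lower_approximation(table, ["porosity", "crack"], "quality", "reject"))
```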

Probabilistic Method of Wind Generation Placement for Congestion Management

Wind farms (WFs) with high levels of penetration are being established in power systems worldwide more rapidly than other renewable resources. The Independent System Operator (ISO), as a policy maker, should propose appropriate places for WF installation in order to maximize the benefits for the investors. There is also a possibility of congestion relief from newly installed WFs, which should be taken into account by the ISO when proposing the locations for WF installation. In this context, an efficient wind farm (WF) placement method is proposed in order to reduce the burden on congested lines. Since wind speed is a random variable and load forecasts also contain uncertainties, probabilistic approaches are used for this type of study. An AC probabilistic optimal power flow (P-OPF) is formulated and solved using Monte Carlo Simulations (MCS). In order to reduce the computation time, point estimate methods (PEM) are introduced as an efficient alternative to the time-demanding MCS. Subsequently, the optimal WF placement is determined using generation shift distribution factors (GSDF), considering a new parameter entitled the wind availability factor (WAF). In order to obtain more realistic results, an N-1 contingency analysis is employed to find the optimal size of the WF, by means of line outage distribution factors (LODF). The IEEE 30-bus test system is used to show and compare the accuracy of the proposed methodology.
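
The Monte Carlo stage can be sketched independently of the OPF: sample wind speeds from a Weibull distribution, map them through a turbine power curve, and collect the output statistics that feed the probabilistic placement study. The Weibull parameters and turbine ratings below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def turbine_power(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0, p_rated=2.0):
    """Piecewise turbine power curve (MW) for wind speed v (m/s)."""
    v = np.asarray(v, float)
    p = np.where((v >= v_cut_in) & (v < v_rated),
                 p_rated * (v - v_cut_in) / (v_rated - v_cut_in), 0.0)
    return np.where((v >= v_rated) & (v <= v_cut_out), p_rated, p)

rng = np.random.default_rng(0)
speeds = 8.5 * rng.weibull(2.0, size=100_000)   # scale 8.5 m/s, shape 2 (assumed)
power = turbine_power(speeds)
print(f"mean output {power.mean():.2f} MW, "
      f"capacity-style factor {power.mean() / 2.0:.2%}")
```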

Elliptical Feature Extraction Using Eigenvalues of Covariance Matrices, Hough Transform and Raster Scan Algorithms

In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme consisting of the eigenvalues of covariance matrices, the circular Hough transform and Bresenham's raster scan algorithm. In this approach, we use the fact that the large and small eigenvalues of the covariance matrix are associated with the major and minor axial lengths of the ellipse. The centre location of the ellipse can be identified using the circular Hough transform (CHT). A sparse matrix technique is used to perform the CHT; since sparse matrices squeeze out zero elements and contain only a small number of nonzero elements, they save matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate positions of the circumference pixels are identified using a raster scan algorithm that exploits the geometrical symmetry property. This method does not require the evaluation of tangents or the curvature of edge contours, which are generally very sensitive to noisy working conditions. The proposed method has the advantages of small storage, high speed and accuracy in identifying features. The new method has been tested on both synthetic and real images, and several experiments have been conducted on various images with considerable background noise to reveal its efficacy and robustness. Experimental results on the accuracy of the proposed method, along with comparisons with the Hough transform, its variants and other tangent-based methods, are reported.
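
The axial-length step can be stated directly: for contour points parameterized uniformly in angle, the variance along the major axis is a^2/2, so the semi-axes follow from the covariance eigenvalues as a = sqrt(2*lambda_1), b = sqrt(2*lambda_2). A minimal sketch follows; the uniform-in-angle factor is our convention, and pixel contours sampled by arc length would differ slightly.

```python
import numpy as np

def ellipse_axes(points):
    """Estimate semi-axes and orientation from contour points, shape (n, 2).

    For points parameterized uniformly in angle, Var(x') = a^2 / 2 along
    the major axis, so the semi-axes are sqrt(2 * eigenvalue).
    """
    pts = np.asarray(points, float)
    cov = np.cov(pts.T)                          # 2x2 covariance of x and y
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    b, a = np.sqrt(2.0 * eigvals)                # minor, major semi-axes
    angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])  # major-axis direction
    return a, b, angle

# Synthetic test: ellipse with a = 5, b = 2, rotated 30 degrees.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
x, y = 5 * np.cos(t), 2 * np.sin(t)
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
pts = np.column_stack([c * x - s * y, s * x + c * y])
print(ellipse_axes(pts))   # approximately (5.0, 2.0, 0.524)
```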

Faults Forecasting System

This paper presents a Faults Forecasting System (FFS) that utilizes statistical forecasting techniques to analyze process variable data in order to forecast fault occurrences. FFS proposes a new approach to fault detection. Current fault-detection techniques are based on analyzing the current status of the system variables in order to check whether that status is faulty or not. FFS instead uses forecasting techniques to predict the future timing of faults before they happen. The proposed model applies a subset modeling strategy and a Bayesian approach in order to reduce the dimensionality of the process variables and improve the fault-forecasting accuracy. A practical experiment, designed and implemented at Okayama University, Japan, shows that the proposed model achieves high forecasting accuracy and forecasts faults ahead of time.
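
The before-the-fact flavor of FFS can be illustrated with a toy sketch: fit a trend to recent values of a process variable and extrapolate to estimate when it will cross its alarm threshold. This is a generic illustration of forecasting fault timing, not the paper's subset/Bayesian model.

```python
import numpy as np

def time_to_fault(values, threshold, dt=1.0):
    """Estimate steps until a drifting variable crosses `threshold`.

    Fits a straight line to the recent samples and extrapolates.
    Returns None if the fitted trend never reaches the threshold.
    """
    t = np.arange(len(values)) * dt
    slope, intercept = np.polyfit(t, values, 1)   # linear trend
    if slope <= 0:
        return None
    t_cross = (threshold - intercept) / slope
    remaining = t_cross - t[-1]
    return remaining if remaining > 0 else 0.0

# Hypothetical temperature readings drifting toward a 90-degree alarm limit.
readings = [71.0, 72.2, 73.1, 74.5, 75.2, 76.8]
print(time_to_fault(readings, threshold=90.0))
```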

Evaluation of Protein Digestibility in Canola Meals between Caecectomised and Intact Adult Cockerels

The experiment was conducted to evaluate the digestibility of protein in canola meals (CMs) between caecectomised and intact adult Rhode Island Red (RIR) cockerels using the conventional addition method (CAM) for 7 d (a 4-d adaptation followed by a 3-d experimental period) on the basis of a completely randomized design with 4 replicates. Results indicated that caecectomy decreased (P

Neuro-Fuzzy Network Based on Extended Kalman Filtering for Financial Time Series

A neural network's performance can be measured by its efficiency and accuracy. The major disadvantages of the neural network approach are that the generalization capability of neural networks is often significantly low, and that it may take a very long time to tune the weights in the network to generate an accurate model for highly complex and nonlinear systems. This paper presents a novel neuro-fuzzy architecture based on the extended Kalman filter. To test the performance and applicability of the proposed neuro-fuzzy model, a simulation study of a complex nonlinear dynamic system is carried out. The proposed method can be applied to on-line incremental adaptive learning for the prediction of financial time series. A benchmark case study is used to demonstrate that the proposed model is a superior neuro-fuzzy modeling technique.
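
In EKF-based training, the network weights are treated as the state of a nonlinear system and corrected recursively from the prediction error. Below is a minimal sketch of one scalar-output update step; the noise levels and the toy model are illustrative assumptions, not the proposed architecture.

```python
import numpy as np

def ekf_update(w, P, x, y, predict, jacobian, q=1e-4, r=1e-2):
    """One EKF step treating the network weights w as the state.

    predict(w, x)  -> scalar model output
    jacobian(w, x) -> d(output)/d(w), shape (n_weights,)
    P              -> weight-error covariance, shape (n, n)
    """
    H = jacobian(w, x).reshape(1, -1)            # linearized measurement model
    P = P + q * np.eye(len(w))                   # process-noise inflation
    S = H @ P @ H.T + r                          # innovation variance (1x1)
    K = (P @ H.T) / S                            # Kalman gain, shape (n, 1)
    w = w + (K * (y - predict(w, x))).ravel()    # correct weights by the error
    P = P - K @ H @ P                            # covariance update
    return w, P

# Toy example: fit y = w0 * x with the EKF (the Jacobian is just [x]).
w, P = np.zeros(1), np.eye(1)
for x, y in [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]:
    w, P = ekf_update(w, P, x, y,
                      predict=lambda w, x: w[0] * x,
                      jacobian=lambda w, x: np.array([x]))
print(w)   # approaches 2.0
```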