A Simple Approach to Three-Phase Distribution System Modeling for Power Flow Calculations

This paper presents a simple three-phase power flow method for solving unbalanced radial distribution networks (RDNs) with voltage-dependent loads. It solves a simple algebraic recursive expression for the voltage magnitude, and all data are stored in vector form. The algorithm uses basic principles of circuit theory and is easy to understand. Mutual coupling between the phases is included in the mathematical model. The proposed algorithm has been tested on several unbalanced radial distribution networks, and the results are presented in the article. The results for the 8-bus and IEEE 13-bus unbalanced radial distribution systems agree with the literature and show that the proposed model is valid and reliable.
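
To make the recursive idea concrete, the following minimal sketch (our illustration, not the paper's code) performs a backward/forward sweep on a single-phase chain feeder; in the three-phase model the scalar impedances become 3x3 phase-impedance matrices that capture the mutual coupling. All names and values are illustrative.

    import numpy as np

    # Minimal single-phase sketch of a recursive voltage sweep on a chain
    # feeder; bus impedances and loads below are illustrative only.
    def sweep(V_slack, Z, P, Q, iters=20):
        n = len(Z)
        V = np.full(n, V_slack, dtype=complex)
        for _ in range(iters):
            S = P + 1j * Q                       # voltage-dependent loads would update S from V here
            I = np.conj(S / V)                   # load currents at each bus
            I_branch = np.cumsum(I[::-1])[::-1]  # backward sweep: sum downstream currents
            V_up = V_slack
            for k in range(n):                   # forward sweep: recursive voltage drop
                V[k] = V_up - Z[k] * I_branch[k]
                V_up = V[k]
        return V

    V = sweep(1.0, Z=np.array([0.01 + 0.02j, 0.015 + 0.03j]),
              P=np.array([0.02, 0.01]), Q=np.array([0.01, 0.005]))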

Fault Detection via Stability Analysis for the Hybrid Control Unit of HEVs

Fault detection determines whether a fault exists and when it occurs. This paper discusses a two-layered fault detection method to enhance reliability and safety. The two layers consist of fault detection at the component-level controllers and at the system-level controllers. Component-level controllers detect faults using limit checking, model-based detection, and data-driven detection, while system-level controllers perform detection through stability analysis, which can detect unknown changes. The system-level controllers compare the stability-based detection results with the fault signals from the lower-level controllers. This paper addresses stability-based fault detection methods and suggests fault detection criteria for nonlinear systems. The method is applied to the hybrid control unit of a military hybrid electric vehicle so that the unit can detect faults of the traction motor.
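
As a simple illustration of the component-level layer, the sketch below shows limit checking, the most basic of the methods named above; the signal name and thresholds are hypothetical, not taken from the paper.

    # Minimal limit-checking sketch for a component-level controller;
    # the monitored signal and its allowed band are hypothetical.
    def limit_check(samples, low, high):
        """Return indices where the monitored signal leaves its allowed band."""
        return [i for i, x in enumerate(samples) if not (low <= x <= high)]

    motor_temps = [62.0, 64.5, 71.2, 96.8, 65.0]                   # illustrative sensor data
    fault_indices = limit_check(motor_temps, low=-20.0, high=90.0)  # -> [3]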

Efficient Method for ECG Compression Using Two Dimensional Multiwavelet Transform

In this paper we introduce an effective ECG compression algorithm based on the two-dimensional multiwavelet transform. Multiwavelets offer simultaneous orthogonality, symmetry, and short support, which is not possible with scalar two-channel wavelet systems. These features are known to be important in signal processing, so multiwavelets offer the possibility of superior performance for image processing applications. The SPIHT algorithm has achieved notable success in still image coding. We apply the SPIHT algorithm to the 2-D multiwavelet transform of 2-D arranged ECG signals. Experiments on selected records from the MIT-BIH arrhythmia database show that the proposed algorithm is significantly more efficient than previously proposed ECG compression schemes.
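
The 2-D arrangement step can be sketched as follows: the 1-D record is cut at beat locations (assumed detected beforehand) and stacked row-wise, so that inter-beat redundancy becomes vertical correlation for the 2-D transform to exploit. Beat detection and the multiwavelet/SPIHT stages themselves are omitted here; this is our illustration, not the paper's code.

    import numpy as np

    # Sketch of arranging a 1-D ECG record into a 2-D array, one beat per
    # row; r_peaks (beat onsets) are assumed to be detected beforehand.
    def ecg_to_2d(signal, r_peaks, width=256):
        rows = []
        for r in r_peaks:
            seg = signal[r:r + width]
            seg = np.pad(seg, (0, width - len(seg)))  # zero-pad short segments
            rows.append(seg)
        return np.vstack(rows)  # input to the 2-D transform and SPIHT coder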

Microscopic Emission and Fuel Consumption Modeling for Light-duty Vehicles Using Portable Emission Measurement System Data

Microscopic emission and fuel consumption models have been widely recognized as an effective way to quantify real traffic emissions and energy consumption when applied together with microscopic traffic simulation models. This paper presents a framework for developing Microscopic Emission (HC, CO, NOx, and CO2) and Fuel consumption (MEF) models for light-duty vehicles. A composite acceleration variable is introduced into the MEF model to capture the effect of historical accelerations, interacting with the current speed, on emissions and fuel consumption. The MEF model is calibrated by the multivariate least-squares method for two types of light-duty vehicle using on-board data collected in Beijing, China with a Portable Emission Measurement System (PEMS). The instantaneous validation results show that the MEF model performs better, with lower Mean Absolute Percentage Error (MAPE), than two other models. Moreover, the aggregate validation results show that the MEF model produces reasonable estimates compared to actual measurements, with prediction errors within 12%, 10%, 19%, and 9% for HC, CO, and NOx emissions and fuel consumption, respectively.
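
The calibration step amounts to a linear least-squares fit of the instantaneous emission or fuel rate against speed and acceleration terms. The sketch below uses illustrative regressors; the actual MEF model includes the composite-acceleration variable described above.

    import numpy as np

    # Hedged sketch of multivariate least-squares calibration; v and a are
    # second-by-second PEMS speed and acceleration, y a measured emission
    # or fuel rate. The regressors below are placeholders, not the exact
    # MEF terms.
    def calibrate(v, a, y):
        X = np.column_stack([np.ones_like(v), v, v**2, a, v * a])
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs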

An Adaptive Hand-Talking System for the Hearing Impaired

An adaptive Chinese hand-talking system is presented in this paper. Based on an analysis of three data collection strategies for new users, an adaptation framework comprising supervised and unsupervised adaptation methods is proposed. For supervised adaptation, affinity propagation (AP) is used to extract exemplar subsets, and enhanced maximum a posteriori / vector field smoothing (eMAP/VFS) is proposed to pool the adaptation data among different models. For unsupervised adaptation, polynomial segment models (PSMs) help hidden Markov models (HMMs) to accurately label the unlabeled data; the "labeled" data, together with the signer-independent models, are then input to the MAP algorithm to generate signer-adapted models. Experimental results show that the proposed framework can perform both supervised adaptation with a small amount of labeled data and unsupervised adaptation with a large amount of unlabeled data to tailor the original models, and both achieve improvements in recognition rate.

Stroke Extraction and Approximation with Interpolating Lagrange Curves

This paper proposes a stroke extraction method for use in off-line signature verification. After a brief overview of ongoing research, an algorithm is introduced for detecting and following strokes in static images of signatures. Problems such as the handling of junctions and variations in line width and line intensity are discussed in detail. Results are validated both against an existing on-line signature database and by employing image registration methods.
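
As a sketch of the approximation step named in the title, a detected stroke can be parameterized by cumulative arc length and each coordinate interpolated by a Lagrange polynomial; a handful of sample points per stroke keeps the polynomial well behaved. The points below are illustrative, not from the paper's data.

    import numpy as np
    from scipy.interpolate import lagrange

    # Approximate a stroke with interpolating Lagrange curves: parameterize
    # the (x, y) points by cumulative arc length, interpolate each coordinate.
    pts = np.array([[0, 0], [3, 2], [6, 1], [9, 4]], dtype=float)
    t = np.concatenate([[0], np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))])
    fx, fy = lagrange(t, pts[:, 0]), lagrange(t, pts[:, 1])
    ts = np.linspace(t[0], t[-1], 50)
    curve = np.column_stack([fx(ts), fy(ts)])   # resampled smooth stroke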

Feature Selection Approaches with Missing Value Handling for Data Mining - A Case Study of a Heart Failure Dataset

In this paper, we investigate the characteristics of a clinical dataset with respect to feature selection and classification measurements that must deal with missing values. We also propose appropriate techniques for the aim of this research: to find features that strongly affect mortality and mortality time frame. We quantify the complexity of the clinical dataset and, according to that complexity, propose a data mining process that copes with missing values, high dimensionality, and the prediction problem by using missing value replacement, feature selection, and classification methods. The experimental results will be extended to develop a prediction model for cardiology.
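
A minimal scikit-learn sketch of such a process is shown below; the specific imputer, selector, and classifier are stand-ins, not necessarily the paper's choices.

    from sklearn.pipeline import Pipeline
    from sklearn.impute import SimpleImputer
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.tree import DecisionTreeClassifier

    # Missing value replacement -> feature selection -> classification.
    pipe = Pipeline([
        ("impute", SimpleImputer(strategy="mean")),   # fill missing values
        ("select", SelectKBest(f_classif, k=10)),     # keep informative features
        ("clf", DecisionTreeClassifier()),            # mortality prediction
    ])
    # pipe.fit(X_train, y_train); pipe.score(X_test, y_test)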

Heat Transfer Enhancement Studies in a Circular Tube Fitted with Right-Left Helical Inserts with Spacer

An experimental investigation of the heat transfer and friction factor characteristics of a circular tube fitted with 300 right-left (R-L) helical screw inserts with a 100 mm spacer, for different twist ratios, is presented for laminar and turbulent flow. The experimental data obtained were compared with published plain tube data. The heat transfer coefficient enhancement for 300 R-L inserts with a 100 mm spacer is quite comparable with that for 300 R-L inserts. A performance evaluation analysis shows that the performance ratio increases with increasing Reynolds number and decreasing twist ratio, with the maximum at a twist ratio of 2.93. Also, a performance ratio greater than one indicates that this type of twisted insert can be used effectively for heat transfer augmentation.
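
For reference, a common definition of the performance ratio at constant pumping power, which the evaluation above presumably follows in some form, is

    \eta = \frac{Nu_a / Nu_p}{\left( f_a / f_p \right)^{1/3}}

where the subscripts a and p denote the tube with inserts and the plain tube, respectively; \eta > 1 indicates a net heat transfer benefit.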

Behavioral Signature Generation using Shadow Honeypot

A novel behavioral detection framework is proposed to detect zero-day buffer overflow vulnerabilities, based on network behavioral signatures generated from zero-day exploits, instead of the signature-based or anomaly-based detection solutions currently available for IDPS techniques. First, we present the detection model, which uses a shadow honeypot. Our system is used for the online processing of network attacks and the generation of a behavioral detection profile. The detection profile is a dataset of 112 types of metrics describing the exact behavior of malware in the network. In this paper we present examples of generating behavioral signatures for two attacks: a buffer overflow exploit on an FTP server and the well-known Conficker worm. We visualize the important aspects by showing the differences between valid behavior and the attacks. Based on these metrics we can detect attacks with a very high probability of success, although the detection process is computationally expensive.

Effective Defect Prevention Approach in Software Process for Achieving Better Quality Levels

Defect prevention is the most vital but habitually neglected facet of software quality assurance in any project. If applied at all stages of software development, it can reduce the time, overhead, and resources required to engineer a high-quality product. The key challenge for an IT company is to engineer a software product with a minimum of post-deployment defects. This work is an analysis based on data obtained for five selected projects from leading software companies of varying software production competence. The main aim of this paper is to provide information on the various methods and practices supporting defect detection and prevention that lead to successful software development. The defect prevention techniques studied unearth 99% of defects. Inspection is found to be an essential technique for producing near-ideal software through enhanced methodologies of aided and unaided inspection schedules. On average, 13% to 15% of the whole project effort time spent on inspection and 25% to 30% spent on testing are required for 99% to 99.75% defect elimination. A comparison of the end results for the five selected projects across the companies is also presented, highlighting how a company can position itself with an appropriate complementary ratio of inspection to testing.

Simulation of Particle Damping under Centrifugal Loads

Particle damping is a technique for reducing structural vibrations by placing small metallic particles inside a cavity attached to the structure at locations of high vibration amplitude. In this paper, we present an analytical model to simulate particle damping of two-dimensional transient vibrations in structures operating under high centrifugal loads. The simulation results show that the technique remains effective as long as the ratio of the dynamic acceleration of the structure to the applied centrifugal load is more than 0.1. Particle damping increases with the particle-to-structure mass ratio. However, unlike the case of particle damping in the absence of centrifugal loads, where the damping efficiency depends strongly on the size of the cavity, here this dependence becomes very weak. Despite the simplicity of the model, the simulation results are in reasonably good agreement with the very scarce experimental data available in the literature for particle damping under centrifugal loads.
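
Writing the abstract's effectiveness criterion explicitly, with the centrifugal load expressed under a rigid-rotation assumption (our notation, not necessarily the paper's),

    \frac{a_{dyn}}{r\,\Omega^{2}} > 0.1

where a_{dyn} is the dynamic acceleration amplitude at the cavity, r the radius of rotation, and \Omega the rotational speed.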

A Frugal Bidding Procedure for Replicating WWW Content

Fine-grained data replication over the Internet allows duplication of frequently accessed data objects, as opposed to entire sites, at certain locations so as to improve the performance of large-scale content distribution systems. In a distributed system, agents representing their sites try to maximize their own benefit, since they are driven by different goals, such as minimizing their communication costs, latency, etc. In this paper, we use game-theoretical techniques, in particular auctions, to identify a bidding mechanism that encapsulates the selfishness of the agents while keeping a controlling hand over them. In essence, the proposed game-theory-based mechanism studies what happens when independent agents act selfishly and how to control them so as to maximize overall performance. A bidding mechanism asks how one can design systems so that agents' selfish behavior results in the desired system-wide goals. Experimental results reveal that this mechanism provides excellent solution quality while maintaining fast execution times. Comparisons are made against well-known techniques such as greedy, branch-and-bound, game-theoretical auctions, and genetic algorithms.

Signal Reconstruction Using Cepstrum of Higher Order Statistics

This paper presents an algorithm for reconstructing the phase and magnitude responses of a system's impulse response when only the output data are available. The system is driven by a zero-mean, independent and identically distributed (i.i.d.) non-Gaussian sequence that is not observed. The additive noise is assumed to be Gaussian. This is an important and essential problem in many practical applications across science and engineering, such as biomedical, seismic, and speech signal processing. The method is based on evaluating the bicepstrum of the third-order statistics of the observed output data. Simulation results are presented that demonstrate the performance of this method.
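
For reference, the quantities involved can be written as follows: for a zero-mean output x(n), the third-order cumulant, its bispectrum, and the bicepstrum are

    c_3(\tau_1, \tau_2) = E\{ x(n)\, x(n+\tau_1)\, x(n+\tau_2) \}
    B(\omega_1, \omega_2) = \sum_{\tau_1} \sum_{\tau_2} c_3(\tau_1, \tau_2)\, e^{-j(\omega_1 \tau_1 + \omega_2 \tau_2)}
    b(m, n) = \mathcal{F}^{-1} \{ \log B(\omega_1, \omega_2) \}

Since B(\omega_1, \omega_2) = \gamma_3 H(\omega_1) H(\omega_2) H^{*}(\omega_1 + \omega_2) for a linear time-invariant system driven by i.i.d. non-Gaussian noise, both the magnitude and the phase of H are recoverable, and additive Gaussian noise does not contribute to the third-order statistics.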

Reliability Assessment of Bangladesh Power System Using Recursive Algorithm

An electric utility's main concern is to plan, design, operate, and maintain its power supply so as to provide an acceptable level of reliability to its users. This clearly requires that standards of reliability be specified and used in all three sectors of the power system, i.e., generation, transmission, and distribution. That is why the reliability of a power system is always a major concern to power system planners. This paper presents a reliability analysis of the Bangladesh Power System (BPS). The reliability index loss of load probability (LOLP) of BPS is evaluated using a recursive algorithm, considering no de-rated states of the generators. BPS has sixty-one generators and a total installed capacity of 5275 MW; its maximum demand is about 5000 MW. The relevant generator data and hourly load profiles were collected from the National Load Dispatch Center (NLDC) of Bangladesh, and the LOLP index is assessed for the last ten years.
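
The recursive algorithm is presumably the classical capacity-outage table construction; a minimal sketch with illustrative units and loads (not the actual BPS data) is given below.

    # Build the cumulative outage probability table recursively: adding a
    # unit of capacity C with forced outage rate q gives
    #   P(X) = P'(X) * (1 - q) + P'(X - C) * q,
    # where P(X) is the probability of a capacity outage of X MW or more.
    def add_unit(P, C, q):
        return lambda X: P(X) * (1.0 - q) + P(X - C) * q

    P = lambda X: 1.0 if X <= 0 else 0.0          # empty system: outage >= 0 is certain
    for cap, forc in [(200, 0.04), (150, 0.06), (100, 0.08)]:  # (MW, FOR)
        P = add_unit(P, cap, forc)

    installed = 450.0
    hourly_loads = [300.0, 320.0, 280.0, 350.0]   # stand-in hourly demand (MW)
    # Loss of load occurs in an hour when the outage exceeds the reserve margin.
    LOLP = sum(P(installed - L + 1e-9) for L in hourly_loads) / len(hourly_loads)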

Evaluation of Graph-based Analysis for Forest Fire Detections

Spatial outliers in remotely sensed imagery are observed quantities showing unusual values compared to their neighboring pixel values. Various methods exist in statistics and data mining to detect spatial outliers based on spatial autocorrelation. These methods can be applied to detecting forest fire pixels in MODIS imagery from NASA's AQUA satellite, because forest fire detection can be cast as finding spatial outliers in the spatial variation of brightness temperature. This is what distinguishes our approach from traditional fire detection methods. In this paper, we propose a graph-based forest fire detection algorithm based on spatial outlier detection methods and test it to evaluate its applicability. For this, the ordinary scatter plot and Moran's scatter plot were used. To evaluate the proposed algorithm, the results were compared with the MODIS fire product provided by the NASA MODIS Science Team, which demonstrated the potential of the proposed algorithm for detecting fire pixels.
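
The generic idea behind such scatter-plot tests can be sketched as follows: compare each pixel's brightness temperature with the mean of its 8 neighbors and flag pixels whose deviation is extreme. This is our simplified illustration, not the paper's algorithm; the threshold is arbitrary.

    import numpy as np

    # Flag hot spatial outliers in a brightness-temperature array T.
    def spatial_outliers(T, k=3.0):
        pad = np.pad(T, 1, mode="edge")
        nbr = sum(np.roll(np.roll(pad, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dy, dx) != (0, 0))[1:-1, 1:-1] / 8.0   # 8-neighbor mean
        d = T - nbr                                          # local deviation
        z = (d - d.mean()) / d.std()                         # standardize
        return z > k                                         # candidate fire pixels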

Prospective Mathematics Teachers' Views about Using Flash Animations in Mathematics Lessons

The purpose of this study is to determine secondary prospective mathematics teachers' views on using Flash animations in mathematics lessons and to reveal how sample presentations of different mathematical concepts altered their views. This is a case study involving three secondary prospective mathematics teachers from a state university in Turkey. The data were gathered from two semi-structured interviews. Findings revealed that these animations help students understand mathematics meaningfully, relate mathematics to the real world, support visualization, and appreciate the importance of mathematics. The analysis of the data indicated that the sample presentations enhanced the participants' views on using Flash animations in mathematics lessons.

A Preliminary Study on the Suitability of Data Driven Approach for Continuous Water Level Modeling

Reliable water level forecasts are particularly important for warning against dangerous floods and inundation. The current study investigates the suitability of the adaptive network-based fuzzy inference system (ANFIS) for continuous water level modeling. A hybrid learning algorithm, which combines the least-squares method and the back-propagation algorithm, is used to identify the parameters of the network. For this study, water level data are available for the hydrological year 2002 with a sampling interval of one hour. The number of antecedent water levels to include in the input variables is determined by two statistical methods, the autocorrelation function and the partial autocorrelation function of the series. Forecasting was done for 1 hour up to 12 hours ahead in order to compare the models' generalization at longer horizons. The results demonstrate that the ANFIS model can be applied successfully and provides high accuracy and reliability for river water level estimation. In general, it provides accurate and reliable water level predictions 1 hour ahead, achieving MAPE = 1.15% and correlation = 0.98. Up to 12 hours ahead, the model still shows relatively good performance, with a prediction error of less than 9.65%. The information gathered from these preliminary results provides useful guidance for the design of flood early warning systems, in which the magnitude and timing of a potential extreme flood are indicated.
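
The lag-selection step can be sketched as below: lags whose partial autocorrelation lies outside the approximate 95% confidence band are kept as candidate model inputs, with the autocorrelation function inspected similarly. The series here is synthetic, standing in for the hourly water levels.

    import numpy as np
    from statsmodels.tsa.stattools import acf, pacf

    levels = np.cumsum(np.random.randn(8760)) + 100.0  # stand-in hourly levels
    r = acf(levels, nlags=24)                          # autocorrelation, inspected visually
    p = pacf(levels, nlags=24)                         # partial autocorrelation
    band = 1.96 / np.sqrt(len(levels))                 # approx. 95% confidence band
    input_lags = [k for k in range(1, 25) if abs(p[k]) > band]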

Coerced Delay and Multi Additive Constraints QoS Routing Schemes

IP networks are evolving from a data communication infrastructure into a platform for many real-time applications, such as video conferencing and IP telephony, that impose stringent Quality of Service (QoS) requirements. A fundamental issue in QoS routing is to find a path between a source-destination pair that satisfies two or more end-to-end constraints, a problem known to be NP-complete. In this context, we present an algorithm, Multi Constraint Path Problem Version 3 (MCPv3), in which all constraints are approximated and a feasible path is returned in much less time. We present another algorithm, Delay Coerced Multi Constrained Routing (DCMCR), which coerces one constraint and approximates the remaining ones. Our algorithm returns a feasible path, if one exists, in polynomial time between a source-destination pair such that the first weight satisfies the first constraint and every other weight is bounded by the remaining constraints within a predefined approximation factor (a). We present experimental results for different topologies and network conditions.
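
In the same spirit as the algorithms above (our sketch, not the authors' MCPv3 or DCMCR), a label-based multi-constrained search keeps one weight vector per partial path, prunes dominated labels, and accepts a path only if every accumulated weight respects its bound:

    import heapq

    # graph[u] = [(v, (w1, w2, ...)), ...]; bounds = (c1, c2, ...).
    def mc_path(graph, src, dst, bounds):
        heap = [((0.0,) * len(bounds), src, [src])]
        best = {}                               # node -> kept weight vectors
        while heap:
            w, u, path = heapq.heappop(heap)
            if u == dst:
                return path, w                  # feasible path found
            if any(all(b <= a for a, b in zip(w, kept)) for kept in best.get(u, [])):
                continue                        # dominated label: prune
            best.setdefault(u, []).append(w)
            for v, ew in graph.get(u, []):
                nw = tuple(a + b for a, b in zip(w, ew))
                if all(a <= c for a, c in zip(nw, bounds)):  # all constraints hold
                    heapq.heappush(heap, (nw, v, path + [v]))
        return None                             # no feasible path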

Design for Manufacturability and Concurrent Engineering for Product Development

In the 1980s, companies began to feel the effect of three major influences on their product development: newer and innovative technologies, increasing product complexity, and larger organizations. Companies were therefore forced to look for new product development methods. This paper focuses on two of these methods: Design for Manufacturability (DFM) and Concurrent Engineering (CE). The aim of this paper is to examine and analyze product development methods, specifically DFM and CE, through which companies can benefit by shortening the product life cycle, reducing cost, and meeting delivery schedules. The paper also presents simplified models that can be modified and used by different companies based on their objectives and requirements. The research methodology is based on case studies: two companies were examined and their product development processes analyzed. Historical data were collected and interviews conducted at these companies; in addition, a survey of the literature and previous research on similar topics was carried out. The paper also presents a cost-benefit analysis of implementation and estimates the implementation time. The research found that the two companies did not meet their delivery times to the customer: for some of their most frequently produced products, 50% to 80% were not delivered on time. The companies follow the traditional, sequential design-and-production approach to product development, which strongly affects time to market. The case studies show that by implementing these new methods and by forming multidisciplinary teams for design and quality inspection, a company can reduce its workflow from 40 steps to 30.

Feature Based Unsupervised Intrusion Detection

The goal of a network-based intrusion detection system is to classify network traffic activities into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many fields, including intrusion detection systems (IDS) using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps improve the efficiency, performance, and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we used the new NSL-KDD dataset, a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification (Normal, Attack) was implemented. The Weka framework, a Java-based open-source collection of machine learning algorithms for data mining tasks, was used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and requires less learning time than using the full feature set with the same algorithm.
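
A minimal scikit-learn sketch of the pipeline is shown below; mutual information stands in for the information gain computed in Weka, and X_train, y_train denote the numeric NSL-KDD features and labels (placeholders, not loaded here).

    from sklearn.cluster import KMeans
    from sklearn.feature_selection import SelectKBest, mutual_info_classif

    # Information-gain style feature ranking, then two-cluster K-means; each
    # cluster is interpreted as Normal or Attack via the majority training label.
    selector = SelectKBest(mutual_info_classif, k=15)
    # X_red = selector.fit_transform(X_train, y_train)      # reduce features
    # clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X_red)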