The Effect of Particle Velocity on the Thickness of Thermally Sprayed Coatings

In this paper, the effect of WC-12Co particle velocity on coating thickness in the high-velocity oxy-fuel (HVOF) thermal spraying process is studied. The statistical results show that spray distance and the oxygen-to-fuel ratio are the most influential factors on particle characteristics and on the thickness of HVOF-sprayed coatings. A SprayWatch diagnostic system, scanning electron microscopy (SEM), X-ray diffraction (XRD), and a thickness measurement system were used for this purpose.

An Empirical Mode Decomposition-Based Method for Action Potential Detection in Raw Neural Data

Information in the nervous system is coded as firing patterns of electrical signals called action potentials or spikes, so an essential step in the analysis of neural mechanisms is the detection of action potentials embedded in neural data. Several methods have been proposed in the literature for this purpose. In this paper, a novel method based on empirical mode decomposition (EMD) is developed. EMD is a decomposition method that extracts oscillations in different frequency ranges from a waveform. The method is adaptive, requiring no a priori knowledge about the data and no parameter tuning. The results on simulated data indicate that the proposed method is comparable with wavelet-based methods for spike detection. For neural signals with a signal-to-noise ratio near 3, the proposed method detects more than 95% of action potentials accurately.
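
A minimal sketch of such a detector, assuming the PyEMD package is available; the number of high-frequency IMFs summed and the robust threshold rule are illustrative choices, not the authors' exact algorithm:

```python
# Sketch of EMD-based spike detection; assumes the PyEMD package.
# The IMF count and threshold rule below are illustrative choices.
import numpy as np
from PyEMD import EMD

def detect_spikes(signal, fs, n_imfs=3, k=4.0):
    """Return sample indices of candidate action potentials."""
    imfs = EMD().emd(signal)                 # adaptive decomposition into IMFs
    hf = imfs[:n_imfs].sum(axis=0)           # first IMFs carry the fastest oscillations
    sigma = np.median(np.abs(hf)) / 0.6745   # robust noise estimate
    above = np.where(np.abs(hf) > k * sigma)[0]
    refractory = int(1e-3 * fs)              # merge crossings closer than 1 ms
    spikes = []
    for idx in above:
        if not spikes or idx - spikes[-1] > refractory:
            spikes.append(idx)
    return np.array(spikes)
```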

Estimation of the Upper Tail Dependence Coefficient for Insurance Loss Data Using an Empirical Copula-Based Approach

Considerable focus in the world of insurance risk quantification is placed on modeling loss values from lines of business (LOBs) that possess upper tail dependence. Copulas such as the Joe, Gumbel, and Student-t copulas may be used for this purpose: the copula structure imparts a desired level of tail dependence on the joint distribution of claims from the different LOBs. Alternatively, practitioners may possess historical or simulated data that already exhibit upper tail dependence through the impact of catastrophe events such as hurricanes or earthquakes. In these circumstances, it is not desirable to induce additional upper tail dependence when modeling the joint distribution of the loss values from the individual LOBs. Instead, it is of interest to accurately assess the degree of tail dependence already present in the data. The empirical copula and its associated upper tail dependence coefficient are presented in this paper as a robust, efficient means of achieving this goal.
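
A minimal sketch of such a nonparametric estimator, assuming numpy: the coefficient lambda_U = lim_{u->1} P(V > u | U > u) is approximated on rank-transformed (pseudo-observation) data at a finite threshold u, which is an illustrative choice:

```python
# Sketch of an empirical-copula-based upper tail dependence estimator.
import numpy as np

def upper_tail_dependence(x, y, u=0.95):
    """Estimate lambda_U at a finite threshold u on rank-transformed data."""
    n = len(x)
    ux = np.argsort(np.argsort(x)) / (n + 1.0)  # pseudo-observations in (0, 1)
    uy = np.argsort(np.argsort(y)) / (n + 1.0)
    joint = np.mean((ux > u) & (uy > u))        # empirical survival copula at (u, u)
    return joint / (1.0 - u)

# Example: simulated losses from two LOBs sharing a common catastrophe shock
rng = np.random.default_rng(0)
shock = rng.pareto(2.0, 10_000)
lob1 = shock + rng.exponential(1.0, 10_000)
lob2 = shock + rng.exponential(1.0, 10_000)
print(upper_tail_dependence(lob1, lob2))        # noticeably above 0 under tail dependence
```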

Computer Aided Diagnosis of Polycystic Kidney Disease Using ANN

Many inherited diseases and non-hereditary disorders are implicated in the development of renal cystic diseases. Polycystic kidney disease (PKD) is a disorder in which clusters of cysts filled with water-like fluid develop within the kidneys. PKD is responsible for 5-10% of end-stage renal failure treated by dialysis or transplantation. New experimental models and the application of molecular biology techniques have provided new insights into the pathogenesis of PKD. Researchers are showing keen interest in developing automated systems that apply computer-aided techniques to the diagnosis of diseases. In this paper, a multilayered feed-forward neural network with one hidden layer is constructed, trained, and tested using the back-propagation learning rule for the diagnosis of PKD, based on physical symptoms and urinalysis test results collected from individual patients. Data collected from 50 patients are used to train and test the network: 75% of the data are used for training and the remaining 25% for testing. The trained network is then applied to new samples, and its output indicates whether the patient is normal or abnormal.
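
A minimal sketch of the described topology, assuming scikit-learn; the feature count, hidden-layer width, and labels below are placeholders, since the abstract does not list the exact inputs:

```python
# Sketch: one-hidden-layer feed-forward network trained by back-propagation.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((50, 8))          # 50 patients, 8 hypothetical symptom/urinalysis features
y = rng.integers(0, 2, 50)       # 0 = normal, 1 = abnormal (placeholder labels)

# 75% / 25% split, as in the paper
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

clf = MLPClassifier(hidden_layer_sizes=(10,), solver="sgd",  # gradient-descent backprop
                    learning_rate_init=0.1, max_iter=2000)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))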

Low Value Capacitance Measurement System with Adjustable Lead Capacitance Compensation

This paper describes the development of a low-cost, highly accurate low-capacitance measurement system that can be used over a range of 0-400 pF with a resolution of 1 pF. The range may be easily altered by a simple resistance or capacitance variation in the measurement circuit. The system uses the CD4093B quad two-input NAND Schmitt-trigger circuit, whose hysteresis is exploited for the measurement, and is integrated with a PIC 18F2550 microcontroller for data acquisition. The microcontroller communicates over USB with software developed on the PC, where an attractive graphical user interface (GUI) provides the user with a real-time, online display of the capacitance under measurement. The system uses a differential mode of capacitance measurement, with reference to a trimmer capacitance, that effectively compensates lead capacitances, a notorious source of error in low-capacitance measurement. The hysteresis of the Schmitt-trigger circuits enables reliable operation by greatly reducing the possibility of false triggering due to stray interference, usually regarded as another significant error source. Real-life testing showed that the proposed system produces highly accurate capacitance measurements when compared with high-end digital capacitance meters.
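
For a Schmitt-trigger RC relaxation oscillator of the kind built around the CD4093B, capacitance can be recovered from the measured oscillation period. The sketch below assumes typical threshold voltages at a 5 V supply; in practice these would be calibrated against known capacitors:

```python
# Sketch: capacitance from the period of a Schmitt-trigger RC oscillator.
# T = R*C*ln[(Vdd - Vn)*Vp / ((Vdd - Vp)*Vn)], so C = T / (R * ln[...]).
import math

def capacitance_from_period(T, R, vdd=5.0, v_p=2.9, v_n=1.9):
    """Thresholds v_p, v_n are assumed typical CD4093 values at 5 V."""
    k = math.log(((vdd - v_n) * v_p) / ((vdd - v_p) * v_n))
    return T / (R * k)

# Example: a 10 us period measured with R = 100 kOhm
print(capacitance_from_period(1e-5, 100e3) * 1e12, "pF")  # ~120 pF
```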

Effect of DG Installation in Distribution System for Voltage Monitoring Scheme

Loss minimization is a long-standing issue, mainly in distribution systems. Losses lead to temperature rise, owing to the significant voltage drop along the distribution line. A compensation scheme should therefore be properly scheduled in an attempt to alleviate the voltage drop phenomenon. Distributed generation (DG) is widely known to improve the voltage profile, provided that over-compensation or under-compensation is avoided. This paper addresses voltage improvement through the installation of different DG types. To ensure optimal sizing and location of the DGs, the previously developed EMEFA technique is used. The system is studied under incremental loading conditions, which is of practical benefit to the power system operator.
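
As a back-of-the-envelope illustration (not the EMEFA technique itself) of why DG injection alleviates voltage drop, the standard per-unit approximation dV ~ (R*P + X*Q)/V can be applied to a two-bus feeder; all impedance and load values below are hypothetical:

```python
# Sketch: approximate feeder voltage drop with and without DG at the load bus.
def voltage_drop(p_load, q_load, v=1.0, r=0.05, x=0.10, p_dg=0.0):
    """Per-unit drop dV ~ (R*P + X*Q) / V, with DG offsetting the active load."""
    return (r * (p_load - p_dg) + x * q_load) / v

print("without DG:", voltage_drop(0.8, 0.4))                 # larger drop
print("with 0.5 pu DG:", voltage_drop(0.8, 0.4, p_dg=0.5))   # alleviated drop
```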

An Efficient Burst Errors Combating for Image Transmission over Mobile WPANs

This paper presents an efficient tool for spreading burst errors. It also studies a vital issue in wireless communications: the transmission of images over wireless networks. IEEE 802.15.4 ZigBee is a short-range communication standard that can be used for short-distance multimedia transmission. The ZigBee network is a Wireless Personal Area Network (WPAN), which needs a strong interleaving mechanism for protection against error bursts; it is also a low-power technology used in Wireless Sensor Network (WSN) implementations. This paper presents a chaotic interleaving scheme, based on the chaotic Baker map, as a data randomization tool for this purpose. The effect of mobility on image transmission is studied at different velocities using Jakes' model. A comparison between the proposed chaotic interleaving scheme and the traditional block and convolutional interleaving schemes for image transmission over a correlated fading channel is presented. The simulation results show the superiority of the proposed chaotic interleaving scheme over the traditional schemes.
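
A minimal sketch of the discretized (generalized) Baker map used as a block interleaver, assuming numpy; the secret key (n1, ..., nk) must sum to the block size N, with each ni dividing N. Deinterleaving applies the inverse mapping:

```python
# Sketch: discretized Baker map as an N x N block interleaver.
import numpy as np

def baker_interleave(block, key):
    """Permute an N x N block with the discretized Baker map."""
    N = block.shape[0]
    assert block.shape == (N, N) and sum(key) == N
    out = np.empty_like(block)
    Ni = 0
    for ni in key:
        assert N % ni == 0
        q = N // ni                       # width of each vertical slice
        for r in range(Ni, Ni + ni):
            for s in range(N):
                out[q * (r - Ni) + s % q, s // q + Ni] = block[r, s]
        Ni += ni
    return out

# Example: every pixel of an 8 x 8 block is relocated under key (2, 4, 2)
blk = np.arange(64).reshape(8, 8)
shuffled = baker_interleave(blk, (2, 4, 2))
```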

Intermolecular Dynamics between Alcohols and Fatty Acid Ester Solvents

This work focused on the interactions that occur between ester solvents and alcohol solutes. The alcohols selected ranged from the simplest alcohol (methanol) to C10 alcohols, and solubility predictions in the form of infinite-dilution activity coefficients were made using the Modified UNIFAC (Dortmund) group contribution model. The model computation was set up in a Microsoft Excel spreadsheet specifically designed for this purpose. It was found that alcohol/ester interactions yielded increasing activity coefficients (i.e., decreasing solubility) with increasing size of the ester solvent molecule. Furthermore, activity coefficients decreased with increasing size of the alcohol solute, and also with an increasing degree of unsaturation of the ester hydrocarbon tail. Tertiary alcohols yielded lower activity coefficients than primary alcohols. Finally, cyclic alcohols yielded higher activity coefficients than straight-chain alcohols until a point was reached where the trend reversed, referred to as the 'crossover' point.

Testing Loaded Programs Using Fault Injection Technique

Fault tolerance is critical in many of today's large computer systems. This paper focuses on improving fault tolerance through testing. In particular, it concentrates on memory faults: how to access the editable part of a process's memory space and how this part is affected. A special Software Fault Injection Technique (SFIT) is proposed for this purpose. This is done by sequentially scanning the memory of the target process and trying to edit the maximum number of bytes inside that memory. The technique was implemented and tested on a group of programs from software packages such as jetAudio, Notepad, Microsoft Word, Microsoft Excel, and Microsoft Outlook. The results from the test sample indicate that the size of the scanned area depends on several factors: process size, process type, and the virtual memory size of the machine under test. The results show that increasing the process size increases the scanned memory space. They also show that input-output processes have a larger scanned area than other processes. Increasing the virtual memory size also affects the size of the scanned area, but only up to a certain limit.
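
A hedged, Windows-only sketch of the scanning idea, locating committed read-write regions of a target process with the Win32 API via ctypes; this illustrates the mechanism, not the authors' SFIT implementation:

```python
# Sketch: sum the writable, committed memory of a target process (Windows only).
import ctypes
from ctypes import wintypes

kernel32 = ctypes.windll.kernel32

class MEMORY_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [
        ("BaseAddress", ctypes.c_void_p),
        ("AllocationBase", ctypes.c_void_p),
        ("AllocationProtect", wintypes.DWORD),
        ("RegionSize", ctypes.c_size_t),
        ("State", wintypes.DWORD),
        ("Protect", wintypes.DWORD),
        ("Type", wintypes.DWORD),
    ]

PROCESS_QUERY_INFORMATION = 0x0400
MEM_COMMIT, PAGE_READWRITE = 0x1000, 0x04

def writable_bytes(pid):
    """Walk the address space with VirtualQueryEx and total the editable regions."""
    h = kernel32.OpenProcess(PROCESS_QUERY_INFORMATION, False, pid)
    mbi, addr, total = MEMORY_BASIC_INFORMATION(), 0, 0
    while kernel32.VirtualQueryEx(h, ctypes.c_void_p(addr),
                                  ctypes.byref(mbi), ctypes.sizeof(mbi)):
        if mbi.State == MEM_COMMIT and mbi.Protect == PAGE_READWRITE:
            total += mbi.RegionSize
        addr = (mbi.BaseAddress or 0) + mbi.RegionSize   # next region
    kernel32.CloseHandle(h)
    return total
```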

Integrated Drunken Driving Prevention System

Needless to say, a majority of road accidents are due to drunk driving, yet there is no effective mechanism to prevent it. Here we have designed an integrated system for this purpose. Alcohol content in the driver's body is detected by means of an infrared breath analyzer placed at the steering wheel. An infrared cell directs infrared energy through the sample, and any unabsorbed energy at the other side is detected. The higher the concentration of ethanol, the more infrared absorption occurs (in much the same way that a sunglass lens absorbs visible light, alcohol absorbs infrared light). Thus the alcohol level of the driver is continuously monitored and calibrated on a scale; when it exceeds a set limit, the fuel supply is cut off. If the device is removed, the fuel supply is likewise cut off automatically, or an alarm is sounded, depending on the requirement. This does not happen abruptly; special indicators fitted at the back of the vehicle signal other road users, avoiding inconvenience to other drivers on the highway. A framework for the integration of the sensors and the control module in a scalable multi-agent system is provided. An SMS containing the current GPS location of the vehicle is sent via a GSM module to the police control room to alert the police. The system is foolproof, and the driver cannot easily tamper with it. It thus provides an effective and cost-effective solution to the problem of drunk driving.
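
A minimal sketch of the alerting step, assuming pyserial and a GSM modem that accepts the standard AT+CMGF/AT+CMGS text-mode commands; the port name, phone number, and coordinates are placeholders:

```python
# Sketch: send a GPS-location SMS through a serial GSM modem with AT commands.
import time
import serial

def send_alert(port, number, lat, lon):
    gsm = serial.Serial(port, 9600, timeout=2)
    gsm.write(b"AT+CMGF=1\r")                        # select SMS text mode
    time.sleep(0.5)
    gsm.write(f'AT+CMGS="{number}"\r'.encode())      # start message to recipient
    time.sleep(0.5)
    msg = f"Drunk driving alert. Location: {lat},{lon}"
    gsm.write(msg.encode() + b"\x1a")                # Ctrl-Z terminates the SMS
    gsm.close()

# send_alert("/dev/ttyUSB0", "+10000000000", 12.9716, 77.5946)
```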

Novel Rao-Blackwellized Particle Filter for Mobile Robot SLAM Using Monocular Vision

This paper presents a novel Rao-Blackwellized particle filter (RBPF) for mobile robot simultaneous localization and mapping (SLAM) using monocular vision. The particle filter is combined with an unscented Kalman filter (UKF) to extend the path posterior by sampling new poses that integrate the current observation, which drastically reduces the uncertainty about the robot pose. Landmark position estimation and updating are also implemented through the UKF. Furthermore, the number of resampling steps is determined adaptively, which significantly reduces the particle depletion problem, and evolution strategies (ES) are introduced to avoid particle impoverishment. The 3D natural point landmarks are constructed from matched Scale Invariant Feature Transform (SIFT) feature pairs, and the matching of multi-dimensional SIFT features is implemented with a KD-tree at a time cost of O(log N). Experimental results on a real robot in an indoor environment show the advantages of the proposed methods over previous approaches.
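
The adaptive-resampling test in RBPF-SLAM is typically based on the effective sample size; a minimal sketch, assuming numpy (the ES-based impoverishment countermeasure is not shown):

```python
# Sketch: resample only when N_eff = 1 / sum(w^2) drops below a fraction of N,
# which limits particle depletion.
import numpy as np

def maybe_resample(weights, threshold=0.5):
    """Systematic resampling triggered by the effective sample size."""
    w = weights / weights.sum()
    n = len(w)
    n_eff = 1.0 / np.sum(w ** 2)
    if n_eff >= threshold * n:
        return np.arange(n), w                       # keep the current particle set
    positions = (np.random.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n - 1)
    return idx, np.full(n, 1.0 / n)                  # uniform weights after resampling
```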

Removal of Iron from Groundwater by Sulfide Precipitation

Iron in groundwater is one of the problems that render water unsuitable for drinking; concentrations above 0.3 mg/L are common in groundwater. The conventional removal method is precipitation under oxic conditions. In this study, iron removal under anaerobic conditions was examined in batch experiments. The process involved purging groundwater samples with H2S to form iron sulfide. Removal of up to 83% was achieved for a 1 mg/L iron solution; the removal efficiency dropped to 82% and 75% for higher initial iron concentrations of 3.55 and 5.01 mg/L, respectively. The average residual sulfide concentration in the water after the process was 25 µg/L, and the Eh level during the process was -272 mV. The removal process was found to follow first-order kinetics with an average rate constant of 4.52 x 10^-3, and the half-life for the concentrations to fall from their initial values was 157 minutes.
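
The reported figures are internally consistent for first-order kinetics, where t_half = ln(2) / k; a one-line check, assuming the rate constant is in min^-1:

```python
# Consistency check of the reported first-order kinetics.
import math

k = 4.52e-3                  # reported average rate constant (assumed 1/min)
print(math.log(2) / k)       # ~153 min, close to the reported 157-minute half-life
```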

Simultaneous Term Structure Estimation of Hazard and Loss Given Default with a Statistical Model using Credit Rating and Financial Information

The objective of this study is to propose a statistical modeling method that enables simultaneous term structure estimation of the risk-free interest rate, hazard, and loss given default, incorporating characteristics of the bond-issuing company such as credit rating and financial information. A reduced-form model is used for this purpose. Statistical techniques such as spline estimation and the Bayesian information criterion are employed for parameter estimation and model selection. An empirical analysis is conducted using Japanese bond market data, and its results confirm the usefulness of the proposed method.
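
As an illustration of the reduced-form pricing identity underlying such models, the sketch below discounts a defaultable bond by r + h, with recovery (1 - LGD) of face value on default; flat curves are assumed purely for illustration, whereas the paper estimates full term structures with splines and selects models by BIC:

```python
# Sketch: defaultable bond price under a constant-hazard reduced-form model.
import numpy as np

def defaultable_bond_price(cashflows, times, r, h, lgd, face=100.0):
    t = np.asarray(times, dtype=float)
    cf = np.asarray(cashflows, dtype=float)
    survival = np.exp(-h * t)                  # probability of no default by each date
    discount = np.exp(-r * t)                  # risk-free discounting
    grid = np.linspace(0.0, t[-1], 1000)
    dens = h * np.exp(-h * grid)               # default-time density (constant hazard)
    recovery = (1.0 - lgd) * face * np.trapz(dens * np.exp(-r * grid), grid)
    return float(np.sum(cf * discount * survival) + recovery)

# 5-year bond, 2% annual coupon on face 100, r = 1%, hazard = 3%, LGD = 60%
print(defaultable_bond_price([2, 2, 2, 2, 102], [1, 2, 3, 4, 5], 0.01, 0.03, 0.60))
```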

An Experimental Study on Evacuated Tube Solar Collector for Heating of Air in India

A solar-powered air heating system using single-ended evacuated tubes is experimentally investigated. A solar air heater containing forty evacuated tubes is used for heating; the collector surface area is about 4.44 m2. The length of the outer glass tube and absorber tube is 1500 mm, with outer diameters of 47 and 37 mm, respectively. The setup has a square header (heat exchanger) of 190 mm x 190 mm cross-section and 1500 mm length; the header contains a central hollow pipe of 60 mm diameter through which the air flows. The setup holds approximately 108 liters of water, which acts as the heat-collecting medium: it collects the solar heat falling on the tubes and delivers it, by natural convection and conduction, to the air flowing through the header pipe. The outlet air temperature depends on several factors, including the air flow rate and the solar radiation intensity. The study was carried out for both up-flow and down-flow of air in the header, under similar weather conditions and at different flow rates, to determine the effect of solar radiation intensity and air flow rate on the outlet air temperature over time, and to establish which flow configuration is more efficient. The results show that the system is highly effective for heating in this region, and that it is most efficient at a particular air flow rate. It was also observed that the down-flow configuration is more effective than up-flow at all flow rates, owing to lower losses in down-flow: the temperature differences between the upper and lower headers, both for the water and for the pipe surfaces at the respective ends, are lower in down-flow.
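
The standard instantaneous efficiency used to compare such runs is eta = m_dot * cp * (T_out - T_in) / (G * A); a minimal sketch with illustrative numbers (only the 4.44 m2 collector area is taken from the paper):

```python
# Sketch: instantaneous collector efficiency for an air heating run.
def collector_efficiency(m_dot, t_in, t_out, irradiance, area=4.44, cp=1005.0):
    """eta = useful heat gain of the air / incident solar power."""
    q_useful = m_dot * cp * (t_out - t_in)   # W, heat gained by the air stream
    return q_useful / (irradiance * area)    # dimensionless efficiency

# e.g. 0.02 kg/s of air heated from 30 C to 55 C under 800 W/m^2
print(collector_efficiency(0.02, 30.0, 55.0, 800.0))
```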

Comparison between Associative Classification and Decision Tree for HCV Treatment Response Prediction

Combined therapy using Interferon and Ribavirin is the standard treatment for patients with chronic hepatitis C. However, the number of responders to this treatment is low, whereas its cost and side effects are high. There is therefore a clear need to predict a patient's response to the treatment from clinical information, to protect patients from intolerable side effects and wasted expense. Different machine learning techniques have been developed for this purpose, among them Associative Classification (AC) and Decision Trees (DT). The aim of this research is to compare the performance of these two techniques in predicting virological response to the standard HCV treatment from clinical information. Two hundred patients treated with Interferon and Ribavirin were analyzed using AC and DT: 150 cases were used to train the classifiers and 50 cases to test them. The experimental results showed that both techniques gave acceptable results, with the best accuracy reaching 92% for AC and 80% for DT.
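
A minimal sketch of the DT side of the comparison, assuming scikit-learn; the features and labels below are placeholders, and the paper's AC classifier is not reproduced here:

```python
# Sketch: decision-tree prediction of treatment response with a 150/50 split.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.random((200, 6))             # 200 patients, 6 hypothetical clinical features
y = rng.integers(0, 2, 200)          # 1 = responder, 0 = non-responder (placeholder)

X_tr, y_tr = X[:150], y[:150]        # 150 training cases, as in the paper
X_te, y_te = X[150:], y[150:]        # 50 test cases

dt = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", dt.score(X_te, y_te))
```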

An Image Encryption Method with Magnitude and Phase Manipulation using Carrier Images

We describe an effective method for image encryption that employs magnitude and phase manipulation using carrier images. Although it involves traditional operations such as magnitude and phase encryption, the novelty of this work lies in deploying carrier images for encryption purposes. To this end, a carrier image is randomly chosen from a set of stored images. A one-dimensional (1-D) discrete Fourier transform (DFT) is then applied to both the original image to be encrypted and the carrier image. Row-wise spectral addition and scaling are performed between the magnitude spectra of the original and carrier images on randomly selected rows, and row-wise phase addition and scaling are similarly performed between their phase spectra on randomly selected rows. The encrypted image obtained by these two operations is subjected to one further level of magnitude and phase manipulation, using another randomly chosen carrier image and a 1-D DFT along the columns. The resulting encrypted image is fully distorted, increasing the robustness of the proposed scheme. Applying the reverse process at the receiver yields a distortion-free decrypted image.
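
A minimal sketch of one level of the row-wise magnitude/phase mixing, assuming numpy; the scaling factors and the selected-row set are illustrative, and the full scheme repeats the operation column-wise with a second carrier:

```python
# Sketch: row-wise magnitude and phase mixing with a carrier image.
import numpy as np

def mix_rows(original, carrier, rows, a=0.7, b=0.3):
    f_o = np.fft.fft(original.astype(float), axis=1)    # 1-D DFT along each row
    f_c = np.fft.fft(carrier.astype(float), axis=1)
    mag, ph = np.abs(f_o), np.angle(f_o)
    mag[rows] = a * mag[rows] + b * np.abs(f_c)[rows]    # spectral addition + scaling
    ph[rows] = a * ph[rows] + b * np.angle(f_c)[rows]
    return np.real(np.fft.ifft(mag * np.exp(1j * ph), axis=1))

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (64, 64))                     # image to encrypt
car = rng.integers(0, 256, (64, 64))                     # randomly chosen carrier
rows = rng.choice(64, size=32, replace=False)            # randomly selected rows
enc = mix_rows(img, car, rows)
```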

U.S. Nuclear Regulatory Commission Training for Research and Training Reactor Inspectors

Currently, a large number of licensing activities (Early Site Permits, Combined Operating Licenses, reactor certifications, etc.) are pending review before the United States Nuclear Regulatory Commission (US NRC), and much of the senior staff at the NRC is now committed to these review and licensing actions. To address this additional workload, the NRC has recruited a large number of new regulatory staff to handle these and other regulatory actions, such as oversight of the US fleet of Research and Test Reactors (RTRs). These reactors place unusual demands on regulatory staff: although few in number (32 licensed RTRs as of 2010), they represent a broad range of reactor types, operations, and research and training aspects that nuclear power plants (such as the 104 LWRs) do not pose. The NRC must inspect and regulate all of these facilities. This paper addresses selected training topics and regulatory activities provided to NRC inspectors for RTRs.

Feature Extraction from Aerial Photos

In a Geographic Information System, one source of needed geographic data is the digitization of analog maps and the evaluation of aerial and satellite photos. This study discusses a method that can be used to extract vector features from aerial photos and create vectorized drawing files; a software tool was also developed for this purpose. Conversion from raster to vector, known as vectorization, is the most important step in creating vectorized drawing files. In the developed algorithm, the aerial photo is first preprocessed: it is converted to grayscale if necessary, noise is reduced, filters are applied, and object edges are detected. After these steps, every pixel of the photo is traversed from upper left to lower right, examining its neighborhood relationships, and one-pixel-wide lines or polylines are obtained. The traced lines must be marked to prevent confusion while vectorization continues: if they are not marked they can be perceived as new lines, but if they are simply erased the vector drawing can become discontinuous. The image is therefore converted from 2-bit to 8-bit, and the detected pixels are expressed with a different value. In conclusion, an aerial photo can be converted into a vector form consisting of lines and polylines that can be opened in any CAD application.
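
A minimal sketch of the pixel-following step, assuming numpy and a binary image produced by the preprocessing described above; traced pixels are marked with a different value rather than erased, as the text describes:

```python
# Sketch: trace one-pixel-wide lines/polylines from a binary raster.
import numpy as np

NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def trace_polylines(binary):
    img = binary.astype(np.uint8)            # 1 = line pixel, 0 = background
    polylines = []
    for r0, c0 in zip(*np.nonzero(img)):     # scan from upper left to lower right
        if img[r0, c0] != 1:
            continue                         # pixel already consumed by a polyline
        line, r, c = [(r0, c0)], r0, c0
        img[r, c] = 2                        # mark instead of erase
        while True:
            for dr, dc in NEIGHBORS:         # follow an unvisited 8-neighbor
                rr, cc = r + dr, c + dc
                if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                        and img[rr, cc] == 1):
                    img[rr, cc] = 2
                    line.append((rr, cc))
                    r, c = rr, cc
                    break
            else:
                break                        # no unvisited neighbor: polyline ends
        polylines.append(line)
    return polylines
```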

Advanced Travel Information System in Heterogeneous Networks

In order to achieve better road utilization and traffic efficiency, there is an urgent need for a travel information delivery mechanism to assist drivers in making better decisions in emerging intelligent transportation system applications. In this paper, we propose a relayed multicast scheme over heterogeneous networks for this purpose. In the proposed system, travel information consisting of summarized traffic conditions, important events, real-time traffic videos, and local information service content is organized into layers and multicast through an integration of WiMAX infrastructure and Vehicular Ad hoc Networks (VANETs). With the support of adaptive modulation and coding in WiMAX, radio resources can be optimally allocated during multicast so as to dynamically adjust the number of data layers received by the users. In addition to the WiMAX-supported multicast, a knowledge propagation and information relay scheme over the VANET is designed. The experimental results validate the feasibility and effectiveness of the proposed scheme.

Security Risk Analysis Based on the Policy Formalization and the Modeling of Big Systems

Security risk models have been successful in estimating the likelihood of attack for simple security threats. However, modeling complex systems and their security risk remains a challenge. Many methods have been proposed to address this problem, but they are often difficult to manipulate and not comprehensive enough, and so are not as widely adopted by administrators and decision-makers as they should be. In this paper we propose a new tool for modeling big systems for this purpose. The software takes into account attack threats and security strengths.