Artificial Neural Networks for Classifying Magnetic Measurements in Tokamak Reactors

This paper is mainly concerned with the application of a novel technique of data interpretation to the characterization and classification of measurements of plasma columns in Tokamak reactors for nuclear fusion applications. The proposed method exploits several concepts derived from soft computing theory. In particular, Artificial Neural Networks are exploited to classify magnetic variables that are useful for determining the shape and position of the plasma with reduced computational complexity. The proposed technique is used to analyze simulated databases of plasma equilibria based on the ITER geometry configuration. As well as demonstrating the successful recovery of scalar equilibrium parameters, we show that the technique can yield practical advantages compared with earlier methods.
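
As a rough illustration of the classification step described above, the sketch below trains a small feed-forward network to map magnetic probe readings to discrete plasma shape/position classes. It is only a minimal sketch under assumed conditions: the probe count, class labels, placeholder data and network size are not taken from the paper.

```python
# Minimal sketch of a neural classifier for magnetic measurements
# (illustrative only; probe count, classes and data are assumptions).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_probes, n_classes, n_equilibria = 30, 4, 1000

# placeholder "database of plasma equilibria": probe signals + shape class
X = rng.standard_normal((n_equilibria, n_probes))
y = rng.integers(0, n_classes, n_equilibria)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```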

Technical and Economic Impacts of Distributed Generation on Distribution System

Distributed Generation (DG) in the form of renewable power generation systems is currently preferred for clean power generation. It has a significant impact on distribution systems. This impact may be either positive or negative, depending on the distribution system, the distributed generator and the load characteristics. In this work, an overview of DG is briefly introduced, DG technologies are listed, and the technical and economic impacts are explained.

Preparation of a Study on the Use of the Resident Registration Number and Alternatives for the RRN

The resident registration number (RRN) was adopted for the purposes of enhanced services for resident convenience and effective performance of governmental administrative affairs. However, it has customarily, and often unreasonably, been used for identification purposes in line with the development and spread of the Internet. In response to the growing concern about the leakage of collected RRNs and possible abuses of stolen RRNs, e.g. identity theft, for crimes, the Korea Communications Commission began to take legal/regulatory actions in 2011 to minimize the online collection and use of resident registration numbers. As the use of the RRN was limited after the revision of the Act on Promotion of Information and Communications Network Utilization and Information Protection, etc., online business providers were required to have alternatives to the RRN for the purposes of verifying the user's identity and age in compliance with the law and settling disputes with customers. This paper presents a means of verifying personal identity that takes advantage of commonly used infrastructure and simply replaces the personal information entered and stored, without requiring users to enter their RRNs.

In Search of an SVD and QRcp Based Optimization Technique of ANN for Automatic Classification of Abnormal Heart Sounds

The Artificial Neural Network (ANN) has been extensively used for classification of heart sounds because of its discriminative training ability and easy implementation. However, it suffers from over-parameterization if the number of nodes is not chosen properly. In such cases, when the dataset contains redundancy, the ANN is trained along with this redundant information, which results in poor validation. A larger network also means more computational expense, resulting in higher hardware and time-related costs. Therefore, an optimal neural network design is needed for real-time detection of pathological patterns, if any, from the heart sound signal. The aims of this work are to (i) select a set of input features that are effective for identification of heart sound signals and (ii) make an optimum selection of nodes in the hidden layer for a more effective ANN structure. Here, we present an optimization technique that involves Singular Value Decomposition (SVD) and QR factorization with column pivoting (QRcp) to optimize an empirically chosen, over-parameterized ANN structure. The input nodes of the ANN structure are optimized by SVD followed by QRcp, while SVD alone is required to prune undesirable hidden nodes. Results are presented for classifying 12 common pathological cases and the normal heart sound.
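
The following sketch illustrates the general SVD/QRcp idea described above: the numerical rank of a feature (or activation) matrix suggests how many nodes are needed, and QR factorization with column pivoting ranks the input columns. The feature matrix, rank threshold and layer sizes are placeholders, not the authors' configuration.

```python
# Minimal sketch of SVD/QRcp-style pruning of an over-parameterized ANN
# (illustrative only; data, threshold and sizes are assumptions).
import numpy as np
from scipy.linalg import qr

def effective_rank(M, tol=1e-2):
    """Estimate numerical rank from the singular value spectrum."""
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s / s[0] > tol))

# X: (n_samples, n_features) matrix of candidate input features
X = np.random.rand(200, 24)            # placeholder feature matrix
r_in = effective_rank(X)               # how many inputs carry independent information

# QR with column pivoting orders the columns by decreasing independence;
# keep the first r_in pivoted columns as the selected input features.
_, _, piv = qr(X, pivoting=True)
selected_inputs = piv[:r_in]

# H: (n_samples, n_hidden) hidden-layer activations of the over-sized trained
# network; its effective rank suggests how many hidden nodes are actually needed.
H = np.tanh(X @ np.random.randn(24, 40))   # placeholder activations
n_hidden_opt = effective_rank(H)

print(f"keep inputs {sorted(selected_inputs)} and {n_hidden_opt} hidden nodes")
```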

Self Organizing Analysis Platform for Wear Particle

The main focus of this paper is the integration of system process information, obtained through an image processing system, with an evolving knowledge database to improve the accuracy and predictability of wear particle analysis. The objective is to intelligently automate the wear particle analysis process using classification via self-organizing maps. This is achieved using relationship measurements among corresponding attributes of various wear particle measurements. Finally, a visualization technique is proposed that helps the viewer understand and utilize these relationships, enabling accurate diagnostics.

Building Relationship Network for Machine Analysis from Wear Debris Measurements

The main focus of this paper is the integration of system process information, obtained through an image processing system, with an evolving knowledge database to improve the accuracy and predictability of wear debris analysis. The objective is to intelligently automate the wear particle analysis process using classification via self-organizing maps. This is achieved using relationship measurements among corresponding attributes of various wear debris measurements. Finally, a visualization technique is proposed that helps the viewer understand and utilize these relationships, enabling accurate diagnostics.
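
As a rough illustration of the self-organizing map classification step mentioned in the two abstracts above, the sketch below trains a tiny SOM on wear debris attribute vectors. The attribute set, map size and training schedule are assumptions, not taken from the papers.

```python
# Minimal self-organizing map sketch for clustering wear debris attribute
# vectors (illustrative; features, map size and schedule are assumptions).
import numpy as np

def train_som(data, grid=(6, 6), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = data.shape[1]
    weights = rng.random((grid[0], grid[1], n_feat))
    coords = np.indices(grid).reshape(2, -1).T        # grid positions of the cells
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)
        sigma = sigma0 * np.exp(-t / epochs)
        for x in data:
            # best matching unit
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), grid)
            # neighbourhood update pulls nearby cells toward the sample
            dist2 = np.sum((coords - bmu) ** 2, axis=1).reshape(grid)
            h = np.exp(-dist2 / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

# attributes per particle, e.g. [size, aspect ratio, edge roughness, reflectivity]
particles = np.random.rand(150, 4)     # placeholder measurements
som = train_som(particles)
```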

Bridging the Quantitative and Qualitative Aspects of Glaucoma Detection

Glaucoma diagnosis involves extracting three features from the fundus image: the optic cup, the optic disc and the vasculature. At present, manual diagnosis is expensive, tedious and time-consuming. A number of studies have been conducted to automate this process. However, the variability between the diagnostic capability of an automated system and an ophthalmologist has yet to be established. This paper discusses the efficiency of, and the variability between, ophthalmologist opinion and a digital thresholding technique. The efficiency and variability measures are based on image quality grading: poor, satisfactory or good. The images are separated into four channels: gray, red, green and blue. An investigation was conducted in which three ophthalmologists graded the images based on image quality. The images were then thresholded using multithresholding and graded in the same manner as by the ophthalmologists. The grades from the ophthalmologists and from the thresholding technique are compared. The results show that there is only a small variability between the results of the ophthalmologists and the digital thresholding.
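
A minimal sketch of the channel separation and multithresholding step is given below, using multi-Otsu thresholding as one plausible realization. The synthetic stand-in image, the choice of three regions and any grading rule derived from the output are assumptions, not the paper's exact procedure.

```python
# Minimal sketch of channel separation and multi-Otsu thresholding of a fundus
# image (illustrative; the synthetic image and region count are assumptions).
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import threshold_multiotsu

# synthetic stand-in for a fundus photograph: a bright disc on a darker background
yy, xx = np.mgrid[0:256, 0:256]
disc = np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2) / (2 * 40.0 ** 2))
rgb = np.stack([0.9 * disc + 0.05, 0.6 * disc + 0.10, 0.3 * disc + 0.10], axis=-1)

channels = {
    "gray":  rgb2gray(rgb),
    "red":   rgb[..., 0],
    "green": rgb[..., 1],
    "blue":  rgb[..., 2],
}

for name, img in channels.items():
    # two thresholds split the channel into three regions (e.g. background,
    # disc rim and cup candidates); region sizes could feed a quality grading
    t = threshold_multiotsu(img, classes=3)
    regions = np.digitize(img, bins=t)
    print(name, "thresholds:", np.round(t, 3), "region sizes:", np.bincount(regions.ravel()))
```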

High-performance Second-Generation Controlled Current Conveyor (CCCII) and High Frequency Applications

In this paper, a modified CCCII is presented. We use a current mirror with a low supply voltage; the circuit operates at a low supply voltage of ±1 V. T-Spice simulations for the TSMC 0.18 μm CMOS technology show that the current and voltage bandwidths are 3.34 GHz and 4.37 GHz, respectively, and that the parasitic resistance at port X has a value of 169.32 Ω for a control current of 120 μA. Using this circuit, we first implemented a universal current-mode filter whose frequency can reach 134.58 MHz. In a second step, we implemented two simulated inductors, one floating and one grounded. These two inductors operate at high frequency and are tunable through the bias current I0. Finally, we used these two inductors to implement two sinusoidal oscillators with frequency ranges of [470 MHz, 692 MHz] and [358 MHz, 572 MHz], respectively, for bias currents I0 in [80 μA, 350 μA].

Feed-Forward Control in Half-Bridge Resonant DC Link Inverter

This paper proposes a feed-forward control for a half-bridge resonant dc link inverter. The feed-forward control configuration is based on synchronous sigma-delta modulation, and the half-bridge resonant dc link inverter consists of two inductors, one capacitor and two power switches. Simulation results show that the proposed technique can reject a non-ideal dc bus, improving the total harmonic distortion.
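
The sketch below shows a first-order synchronous sigma-delta modulator of the kind the feed-forward scheme is based on; the clock rate, reference amplitude and frequency are placeholder values, not the paper's design parameters.

```python
# Minimal sketch of a synchronous first-order sigma-delta modulator used as a
# switching-pattern generator (illustrative; all numeric values are assumptions).
import numpy as np

fs = 100_000                              # sampling (clock) frequency, Hz
t = np.arange(0, 0.02, 1 / fs)            # 20 ms of reference signal
ref = 0.8 * np.sin(2 * np.pi * 50 * t)    # normalized 50 Hz reference

integ = 0.0
bits = np.empty_like(ref)
for k, r in enumerate(ref):
    out = 1.0 if integ >= 0.0 else -1.0   # 1-bit quantizer, synchronous to the clock
    integ += r - out                      # integrate the quantization error
    bits[k] = out

# 'bits' is the switching pattern fed to the half-bridge; its low-frequency
# content follows 'ref' while quantization noise is pushed to high frequency.
```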

Algebraic Approach for the Reconstruction of Linear and Convolutional Error Correcting Codes

In this paper we present a generic approach to the problem of blind estimation of the parameters of linear and convolutional error correcting codes. In a non-cooperative context, an adversary only has access to the noisy transmission he has intercepted. The interceptor has no knowledge of the parameters used by the legitimate users, so before gaining access to the information he first has to blindly estimate the parameters of the error correcting code used in the communication. The main advantage of the presented approach is that the problem of reconstructing such codes can be expressed in a very simple way. This allows us to evaluate theoretical bounds on the complexity of the reconstruction process, as well as bounds on the estimation rate. We show that some classical reconstruction techniques are optimal and also explain why some of them have theoretical complexities greater than those observed experimentally.
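
To make the idea of blind parameter estimation concrete, the sketch below folds an intercepted bit stream at every candidate block length and looks for the rank deficiency over GF(2) that a linear code induces. This is a simplified, noiseless illustration of the general principle, not the algebraic reconstruction method of the paper.

```python
# Minimal sketch of blind block-length estimation for a binary linear code by
# rank analysis over GF(2) (illustrative; noiseless setting is an assumption).
import numpy as np

def gf2_rank(M):
    """Gaussian elimination over GF(2)."""
    M = M.copy() % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def estimate_block_length(bits, n_max=32, rows=200):
    """Fold the stream with every candidate length n and return the smallest n
    showing a rank deficiency: true codewords live in a k-dimensional subspace
    of GF(2)^n, so the folded matrix has rank ~ k < n."""
    for n in range(2, n_max + 1):
        usable = (len(bits) // n) * n
        M = np.reshape(bits[:usable], (-1, n))[:rows]
        deficiency = n - gf2_rank(M)
        if deficiency > 0:
            return n, deficiency
    return None

# Example: stream of (7,4) codewords built from a random systematic generator matrix
rng = np.random.default_rng(1)
G = np.hstack([np.eye(4, dtype=int), rng.integers(0, 2, (4, 3))])
stream = (rng.integers(0, 2, (500, 4)) @ G % 2).ravel()
print(estimate_block_length(stream))   # typically prints (7, 3): n = 7, deficiency n - k = 3
```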

Performance Evaluation of Powder Metallurgy Electrode in Electrical Discharge Machining of AISI D2 Steel Using Taguchi Method

In this paper, an attempt has been made to assess the usefulness of electrodes made through powder metallurgy (PM) in comparison with a conventional copper electrode during electric discharge machining. Experimental results are presented on electric discharge machining of AISI D2 steel in kerosene with a copper-tungsten (30% Cu and 70% W) tool electrode made through the powder metallurgy (PM) technique and a Cu electrode. An L18 (2^1 × 3^7) orthogonal array of the Taguchi methodology was used to identify the effect of process input factors (viz. current, duty cycle and flushing pressure) on the output factors (viz. material removal rate (MRR) and surface roughness (SR)). It was found that the CuW electrode (made through PM) gives a higher surface finish, whereas the Cu electrode is better for a higher material removal rate.
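
For reference, the standard Taguchi signal-to-noise ratios used with such responses can be computed as below: larger-the-better for MRR and smaller-the-better for SR. The replicate values are placeholders, not the experimental data of the paper.

```python
# Minimal sketch of Taguchi signal-to-noise (S/N) ratios for the two responses
# (illustrative; the replicate values are placeholders, not measured data).
import numpy as np

def sn_larger_is_better(y):
    """S/N for responses to maximize, e.g. material removal rate (MRR)."""
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_is_better(y):
    """S/N for responses to minimize, e.g. surface roughness (SR)."""
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(y**2))

# replicated measurements for one L18 trial (hypothetical numbers)
mrr_replicates = [6.2, 5.9, 6.4]      # mm^3/min
sr_replicates = [2.1, 2.3, 2.0]       # micrometres Ra

print("S/N (MRR):", round(sn_larger_is_better(mrr_replicates), 2), "dB")
print("S/N (SR): ", round(sn_smaller_is_better(sr_replicates), 2), "dB")
```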

Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis

In the last few years, three multivariate spectral analysis techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-negative Matrix Factorization (NMF), have emerged as effective tools for oscillation detection and isolation. While the first method is used to determine the number of oscillatory sources, the latter two are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (all-pass filtering) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with a non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory/noisy measurements, which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure based on the notion of a sparseness index is prescribed to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.
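
A minimal sketch of the spectral decomposition and sparseness-based pre-screening ideas discussed above is given below; the simulated signals, the sparseness cut-off and the singular value threshold are assumptions, not the settings proposed in the paper.

```python
# Minimal sketch of spectral PCA plus a sparseness-based pre-screen
# (illustrative; test signals and thresholds are assumptions).
import numpy as np

def power_spectrum(x):
    X = np.fft.rfft(x - np.mean(x))
    return np.abs(X) ** 2 / len(x)

def sparseness(p):
    """Hoyer-style sparseness of a spectrum: near 1 for a single peak, low for flat noise."""
    n = len(p)
    return (np.sqrt(n) - np.sum(p) / np.linalg.norm(p)) / (np.sqrt(n) - 1)

# simulated plant measurements: two share oscillations, one is pure noise
rng = np.random.default_rng(0)
t = np.arange(2000)
y1 = np.sin(0.05 * t) + 0.1 * rng.standard_normal(len(t))
y2 = 0.7 * np.sin(0.05 * t + 1.0) + np.sin(0.21 * t) + 0.1 * rng.standard_normal(len(t))
y3 = rng.standard_normal(len(t))                      # non-oscillatory

spectra = np.array([power_spectrum(y) for y in (y1, y2, y3)])

# pre-screen: drop measurements whose spectra are not sparse (noisy/non-oscillatory)
keep = [i for i, p in enumerate(spectra) if sparseness(p) > 0.6]

# PCA/SVD on the retained spectra: the number of dominant singular values
# indicates the number of independent oscillatory components
s = np.linalg.svd(spectra[keep], compute_uv=False)
n_sources = int(np.sum(s / s[0] > 0.1))
print("kept measurements:", keep, "estimated number of oscillations:", n_sources)
```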

The Role of Intrinsic Motivation in Explaining Students' Willingness to Use Software Applications

The present study was designed to test the influence of intrinsic ICT-motivation, perceived usefulness and ease of use on business students' willingness to use a particular software package. A questionnaire was completed by 196 business students in Norway. We found that 34% of the variance in the students' willingness to use the software could be explained by the three proposed antecedents. Intrinsic ICT-motivation seems to be the most important predictor of students' willingness to use the software package.

3D CAD Models and Their Feature Similarity

Knowing the geometrical pose of products on a manufacturing line before robot manipulation is required and reduces the time needed for overall shape measurement. To achieve this, information on object shape representation and matching is required. Objects are compared through their descriptors, which are conceptually subtracted from each other to form a scalar metric; the smaller the metric value, the closer the objects are considered to be. Rotating an object away from its static pose in some direction changes the scalar metric value of the boundary information obtained after feature extraction of the related object. In this paper, an indexing technique for the retrieval of 3D geometrical models based on the similarity between boundary shapes is proposed, in order to measure 3D CAD object pose using object shape feature matching for a Computer Aided Testing (CAT) system in a production line. Experimental results show the effectiveness of the proposed method.

Experimental Design and Performance Analysis in Plasma Arc Surface Hardening

In this paper, an experimental design based on the Taguchi method is employed to optimize the processing parameters in the plasma arc surface hardening process. The processing parameters evaluated are arc current, scanning velocity and carbon content of the steel. In addition, other significant effects, such as the relations between processing parameters, are also investigated. An orthogonal array, the signal-to-noise (S/N) ratio and analysis of variance (ANOVA) are employed to investigate the effects of these processing parameters. Through this study, not only are the hardened depth increased and the surface roughness improved, but the parameters that significantly affect the hardening performance are also identified. Experimental results are provided to verify the effectiveness of this approach.
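
The sketch below illustrates the kind of main-effect and ANOVA-style contribution analysis performed over an orthogonal array; the L9 array, the hypothetical S/N values and the factor names are placeholders, not the paper's experimental results.

```python
# Minimal sketch of a Taguchi main-effect / ANOVA-style contribution analysis
# over an L9 orthogonal array (illustrative; array choice and S/N data are assumptions).
import numpy as np

# L9(3^3) design: columns = arc current, scanning velocity, carbon content (levels 0..2)
L9 = np.array([[0,0,0],[0,1,1],[0,2,2],
               [1,0,1],[1,1,2],[1,2,0],
               [2,0,2],[2,1,0],[2,2,1]])
sn = np.array([30.1, 31.4, 32.0, 33.2, 34.0, 31.8, 35.1, 33.5, 32.2])  # hypothetical S/N, dB

grand_mean = sn.mean()
ss_total = ((sn - grand_mean) ** 2).sum()
factors = ["arc current", "scanning velocity", "carbon content"]
for j, name in enumerate(factors):
    level_means = [sn[L9[:, j] == lvl].mean() for lvl in range(3)]
    # factor sum of squares relative to the grand mean (3 runs per level)
    ss = 3 * sum((m - grand_mean) ** 2 for m in level_means)
    print(f"{name}: level means {np.round(level_means, 2)}, contribution {100 * ss / ss_total:.1f}%")
```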

Open Cloud Computing with Fault Tolerance

Cloud Computing (CC) has become one of the most talked about emerging technologies, providing powerful computing and large storage environments through the use of the Internet. Cloud computing provides different dynamically scalable computing resources as a service. It brings economic benefits to individuals and businesses that adopt the technology. In theory, adoption of cloud computing reduces capital and operational expenditure on information technology. For this to become a reality, several challenges need to be solved while at the same time addressing the concerns that consumers have about cloud computing. This paper looks at Cloud Computing in general, then highlights the challenges of Cloud Computing, and finally suggests solutions to some of these challenges.

Performance Analysis of MIMO Based Multi-User Cooperation Diversity Over Various Fading Channels

In this paper, a hybrid FDMA-TDMA access technique in a cooperative distributed fashion is analyzed, introducing and implementing a modified version of the protocol presented in [1], termed the Power and Cooperation Diversity Gain Protocol (PCDGP). The wireless network consists of two user terminals, two relays and a destination terminal equipped with two antennas. The relays operate in amplify-and-forward (AF) mode with a fixed gain. Two operating modes, a cooperation-gain mode and a power-gain mode, are exploited from the source terminals to the relays, which work under a best channel selection scheme. Vertical BLAST (Bell Laboratories Layered Space Time), or V-BLAST, with minimum mean square error (MMSE) nulling is used at the relays to perfectly detect the joint signals from the multiple source terminals. The performance is analyzed using the binary phase shift keying (BPSK) modulation scheme and investigated over independent and identically distributed (i.i.d.) Rayleigh, Ricean-K and Nakagami-m fading environments. Simulation results show that the proposed scheme can provide better signal quality for uplink users in a cooperative communication system using the hybrid FDMA-TDMA technique.
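
The core MMSE nulling operation used at the relays can be sketched as below for a single two-user, two-antenna snapshot; the channel model, SNR and symbol values are illustrative assumptions, not the paper's simulation setup.

```python
# Minimal sketch of MMSE nulling for jointly received BPSK signals at a
# two-antenna relay (illustrative; channel, SNR and symbols are assumptions).
import numpy as np

rng = np.random.default_rng(3)
n_tx, n_rx, snr_db = 2, 2, 10
sigma2 = 10 ** (-snr_db / 10)

s = rng.choice([-1.0, 1.0], size=n_tx)                 # BPSK symbols from the two users
H = (rng.standard_normal((n_rx, n_tx)) +
     1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)   # Rayleigh channel
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
y = H @ s + n

# MMSE nulling matrix: W = (H^H H + sigma^2 I)^-1 H^H
W = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(n_tx)) @ H.conj().T
s_hat = np.sign((W @ y).real)                          # BPSK decision
print("sent:", s, "detected:", s_hat)
```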

Analysis of Lower Extremity Muscle Flexibility among Indian Classical Bharathnatyam Dancers

Musculoskeletal problems are common in the high-performance dance population. This study attempts to identify the lower extremity muscle flexibility parameters prevailing among Bharathnatyam dancers and to analyze whether any significant difference in flexibility parameters exists between normal and injured dancers. Four hundred and one female dancers and 17 male dancers participated in this study. Flexibility parameters (hamstring tightness, hip internal and external rotation, and tendoachilles in supine and sitting postures) were measured using a goniometer. From the results of our study, it is evident that injured female Bharathnatyam dancers had significantly (p < 0.05) higher hamstring tightness in the left lower extremity compared to normal female dancers. The range of motion for the left tendoachilles was significantly (p < 0.05) higher for the normal female group than for the injured dancers in the supine lying posture. The majority of the injured dancers had high hamstring tightness, which could be a possible reason for pain and musculoskeletal disorders (MSDs).

Project Selection by Using a Fuzzy TOPSIS Technique

Selection of a project among a set of possible alternatives is a difficult task that the decision maker (DM) has to face. In this paper, we propose a new method for the project selection problem using a fuzzy TOPSIS technique. After reviewing four common methods of comparing investment alternatives (net present value, rate of return, benefit-cost analysis and payback period), we use them as criteria in a TOPSIS technique. First, we calculate the weight of each criterion by pairwise comparison, and then we apply the improved TOPSIS assessment to the project selection.
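
As a rough illustration of the assessment step, the sketch below performs a crisp TOPSIS ranking over the four criteria listed above; the decision matrix and criterion weights are placeholders, and the fuzzy (triangular-number) extension used in the paper is not reproduced here.

```python
# Minimal sketch of a crisp TOPSIS ranking over the four investment criteria
# (illustrative; the decision matrix and weights are assumptions).
import numpy as np

# rows = candidate projects; columns = NPV, rate of return, benefit-cost ratio, payback period
D = np.array([[120.0, 0.15, 1.4, 4.0],
              [ 95.0, 0.18, 1.2, 3.0],
              [150.0, 0.12, 1.6, 6.0]])
weights = np.array([0.35, 0.25, 0.25, 0.15])      # e.g. obtained from pairwise comparison
benefit = np.array([True, True, True, False])      # payback period is a cost criterion

R = D / np.linalg.norm(D, axis=0)                  # vector-normalized matrix
V = R * weights                                    # weighted normalized matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)           # higher = better project

print("closeness coefficients:", np.round(closeness, 3))
print("ranking (best first):", np.argsort(-closeness))
```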

Ranking DMUs by Ideal PPS in Data Envelopment Analysis

The original DEA model evaluates each DMU optimistically, but the interval DEA model proposed in this paper is formulated to obtain an efficiency interval consisting of evaluations from both the optimistic and the pessimistic viewpoints. DMUs are improved so that their lower bounds become large enough to attain the maximum value of one. The points obtained by this method are called ideal points, and the ideal PPS is constructed from the ideal points of the efficient DMUs. The purpose of this paper is to rank DMUs by this ideal PPS. Finally, we extend the efficiency interval of a DMU to the variable returns-to-scale (RTS) technology.
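
For orientation, the sketch below computes the standard optimistic (input-oriented CCR) efficiency that the interval model extends, using a small placeholder data set; it does not implement the ideal-point or pessimistic-bound construction of the paper.

```python
# Minimal sketch of input-oriented CCR (optimistic) DEA efficiency via linear
# programming (illustrative; the input/output data set is an assumption).
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 4.0, 5.0],     # inputs  (m x n): rows = inputs, cols = DMUs
              [3.0, 2.0, 5.0, 4.0]])
Y = np.array([[1.0, 1.5, 2.0, 2.5]])    # outputs (s x n)

def ccr_efficiency(j0):
    m, n = X.shape
    s = Y.shape[0]
    # variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    # inputs:  X @ lam - theta * x_j0 <= 0
    A_in = np.hstack([-X[:, [j0]], X])
    b_in = np.zeros(m)
    # outputs: -Y @ lam <= -y_j0   (i.e. Y @ lam >= y_j0)
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, j0]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1))
    return res.fun                       # optimal theta = efficiency score

for j in range(X.shape[1]):
    print(f"DMU {j + 1}: CCR efficiency = {ccr_efficiency(j):.3f}")
```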