Parkinson's Disease Classification using Neural Network and Feature Selection

In this study, a Multi-Layer Perceptron (MLP) with the Back-Propagation learning algorithm is used to classify Parkinson's disease (PD) for effective diagnosis, a challenging problem for the medical community. Typically characterized by tremor, PD occurs due to the loss of dopamine in the brain's thalamic region, which results in involuntary or oscillatory movements in the body. A feature selection algorithm is applied along with biomedical test values to diagnose Parkinson's disease. Clinical diagnosis is done mostly by the doctor's expertise and experience, but cases of wrong diagnosis and treatment are still reported. Patients are asked to take a number of tests for diagnosis, and in many cases not all of the tests contribute towards an effective diagnosis. Our work is to classify the presence of Parkinson's disease with a reduced number of attributes. Originally, 22 attributes are involved in the classification. We use Information Gain to determine which attributes to retain, reducing the number of tests that need to be taken from patients; twenty-two attributes are thereby reduced to sixteen. An artificial neural network is then used to classify the diagnosis of patients. The accuracy is 82.051% on the training data set and 83.333% on the validation data set.
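
As a minimal sketch of this pipeline (not the authors' exact implementation), the snippet below ranks features by information gain, keeps the top sixteen, and trains a back-propagation MLP; the data here is a random placeholder for the 22 biomedical attributes.

```python
# Hedged sketch: information-gain feature selection followed by an MLP,
# mirroring the pipeline described above. The data is a random placeholder
# standing in for the 22 biomedical measurements.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(195, 22))              # placeholder: 22 attributes
y = rng.integers(0, 2, size=195)            # placeholder: PD / healthy labels

# Rank the 22 attributes by information gain and keep the best 16.
gain = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(gain)[::-1][:16]

X_train, X_val, y_train, y_val = train_test_split(
    X[:, keep], y, test_size=0.3, random_state=0)

# Back-propagation-trained multi-layer perceptron on the reduced feature set.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("validation accuracy:", clf.score(X_val, y_val))
```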

Exponential Particle Swarm Optimization Approach for Improving Data Clustering

In this paper we use exponential particle swarm optimization (EPSO) to cluster data. We then compare the EPSO clustering algorithm, which relies on an exponential variation of the inertia weight, with the particle swarm optimization (PSO) clustering algorithm, which relies on a linearly varying inertia weight. The comparison is evaluated on five data sets. The experimental results show that the EPSO clustering algorithm increases the possibility of finding the optimal positions, as it decreases the number of failures. They also show that the EPSO clustering algorithm has a smaller quantization error than the PSO clustering algorithm, i.e. the EPSO clustering algorithm is more accurate than the PSO clustering algorithm.
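
The paper does not give the exact schedules, but the contrast can be sketched as follows: a linearly decreasing inertia weight versus an exponentially decaying one, both plugged into the standard PSO velocity update. The constants below are common defaults, not values from the paper.

```python
# Hedged sketch of the two inertia-weight schedules contrasted above.
# w_max, w_min, and the decay constant tau are illustrative defaults only.
import math

w_max, w_min, T = 0.9, 0.4, 200   # assumed bounds and iteration budget

def linear_inertia(t):
    """Standard PSO: inertia decreases linearly from w_max to w_min."""
    return w_max - (w_max - w_min) * t / T

def exponential_inertia(t, tau=60.0):
    """EPSO-style: inertia decays exponentially toward w_min."""
    return w_min + (w_max - w_min) * math.exp(-t / tau)

# Either schedule feeds the usual PSO velocity update (c1, c2 accelerations):
# v = w(t) * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```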

Speaker-Independent Quranic Recognizer Based on Maximum Likelihood Linear Regression

An automatic speech recognition system for the formal Arabic language is needed. The Quran is the most formal spoken book in Arabic, and it is recited all over the world. In this research, a speaker-independent automatic speech recognizer for Quranic Arabic was developed and tested. The system was built on tri-phone Hidden Markov Models and Maximum Likelihood Linear Regression (MLLR). MLLR computes a set of transformations that reduce the mismatch between an initial model set and the adaptation data; it uses a regression class tree and estimates a set of linear transformations for the mean and variance parameters of a Gaussian mixture HMM system. The 30th chapter of the Quran, recited by five of the most famous readers of the Quran, was used for training and testing. The chapter includes about 2000 distinct words. The advantages of using Quranic verses as the database for this recognizer are the uniqueness of the words and the high level of orderliness between verses. The accuracy on the test data ranged from 68 to 85%.
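
The MLLR mean adaptation mentioned above can be sketched in a few lines: each regression class shares an affine transform that maps a speaker-independent Gaussian mean to the adapted one. The dimensions and values below are illustrative, not from the paper.

```python
# Hedged sketch of an MLLR mean transform: adapted_mean = A @ mu + b,
# with one (A, b) pair shared by all Gaussians in a regression class.
# Dimensions and values are illustrative placeholders.
import numpy as np

dim = 3                              # feature dimension (e.g. MFCCs)
mu = np.array([1.0, -0.5, 2.0])      # speaker-independent Gaussian mean
A = np.eye(dim) * 1.05               # rotation/scaling part of the transform
b = np.array([0.1, 0.0, -0.2])       # bias part of the transform

adapted_mu = A @ mu + b              # mean adapted toward the new speaker
print(adapted_mu)
```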

HIV Treatment Planning on a Case-by-Case Basis

This study presents a mathematical modeling approach to the planning of HIV therapies on an individual basis. The model replicates clinical data from typical-progressors to AIDS for all stages of the disease with good agreement. Clinical data from rapid-progressors and long-term non-progressors is also matched by estimation of immune system parameters only. The ability of the model to reproduce these phenomena validates the formulation, a fact which is exploited in the investigation of effective therapies. The therapy investigation suggests that, unlike continuous therapy, structured treatment interruptions (STIs) are able to control the increase in both the drug-sensitive and drug-resistant virus population and, hence, prevent the ultimate progression from HIV to AIDS. The optimization results further suggest that even patients characterised by the same progression type can respond very differently to the same treatment and that the latter should be designed on a case-by-case basis. Such a methodology is presented here.
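
The abstract does not give the model equations, but a standard three-compartment HIV model of the kind such studies build on can be sketched as follows; the parameter values are illustrative textbook defaults, not the paper's patient-specific estimates.

```python
# Hedged sketch of a basic target-cell/infected-cell/virus HIV model, a
# common starting point for therapy-planning studies like the one above.
# Parameters are illustrative, not the paper's patient-specific estimates.
from scipy.integrate import solve_ivp

lam, d, beta, delta, p, c = 1e4, 0.01, 2e-7, 0.7, 100.0, 13.0
eta = 0.0   # drug efficacy in [0, 1]; switching eta on/off models an STI

def hiv(t, y):
    T, I, V = y                       # healthy cells, infected cells, virus
    dT = lam - d * T - (1 - eta) * beta * T * V
    dI = (1 - eta) * beta * T * V - delta * I
    dV = p * I - c * V
    return [dT, dI, dV]

sol = solve_ivp(hiv, (0, 200), [1e6, 0.0, 1e-3], max_step=0.1)
print(sol.y[:, -1])                   # state after 200 days
```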

Long-Term Simulation of Digestive Sound Signals by Cepstral Technique

In this study, an investigation of digestive diseases has been carried out in which sound acts as the detection medium. Following preprocessing, the extracted signal is registered in the cepstrum domain. After classification of the digestive diseases, the system selects random samples based on their features and generates the nonstationary, long-term signals of interest via the inverse transform in the cepstral domain, presented in digital and audible form as the output. This structure is updatable; in other words, on receiving a new signal the corresponding disease classification is updated in the feature domain.
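
For reference, the real cepstrum used as the feature domain here can be computed as the inverse Fourier transform of the log magnitude spectrum; the input below is a synthetic placeholder rather than clinical data.

```python
# Hedged sketch: computing the real cepstrum of a signal, the domain in
# which the study registers and regenerates digestive sounds. The input
# is a synthetic placeholder, not recorded digestive sound.
import numpy as np

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)

spectrum = np.fft.fft(x)
log_mag = np.log(np.abs(spectrum) + 1e-12)   # small offset avoids log(0)
cepstrum = np.fft.ifft(log_mag).real          # real cepstrum of x
```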

Detecting the Nonlinearity in Time Series from Continuous Dynamic Systems Based on Delay Vector Variance Method

Time series data often come from continuous dynamic systems. This paper first studies the detection of nonlinearity in time series from continuous dynamic systems by applying the phase-randomized surrogate algorithm. Then, the Delay Vector Variance (DVV) method is introduced into the nonlinearity test. The results show that, under different sampling conditions, opposite detections of nonlinearity are obtained with traditional test statistics, namely the third-order autocovariance and the asymmetry due to time reversal, whereas the DVV method performs well in determining the nonlinearity of the Lorenz signal. This indicates that the proposed method can characterize continuous dynamic signals effectively.
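
The phase-randomized surrogate algorithm mentioned above can be sketched directly: keep the amplitude spectrum of the series, randomize the phases, and invert, which destroys nonlinear structure while preserving the linear correlations.

```python
# Hedged sketch of phase-randomized surrogate generation: the surrogate
# shares the original amplitude spectrum (hence linear correlations) but
# has randomized phases, destroying any nonlinear structure.
import numpy as np

def phase_randomized_surrogate(x, rng=np.random.default_rng(0)):
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, spectrum.size)
    phases[0] = 0.0                       # keep the mean (DC) component real
    surrogate = np.abs(spectrum) * np.exp(1j * phases)
    return np.fft.irfft(surrogate, n=x.size)

x = np.sin(np.linspace(0, 40 * np.pi, 1024)) ** 3   # toy nonlinear series
s = phase_randomized_surrogate(x)
```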

Evaluation of Aerodynamic Noise Generation by a Generic Side Mirror

The aerodynamic noise radiated from a side view mirror (SVM) in high-speed airflow is calculated by combining unsteady incompressible fluid flow analysis with acoustic analysis. The transient flow past the generic SVM is simulated with two turbulence models, namely DES (Detached Eddy Simulation) and LES (Large Eddy Simulation). Detailed velocity vectors and contour plots of the time-varying velocity and pressure fields are presented along cut planes in the flow field. Mean and transient pressures are also monitored at several points in the flow field and compared to corresponding experimental data published in the literature. The acoustic predictions are made using the Ffowcs Williams-Hawkings (FW-H) acoustic analogy and the boundary element method (BEM).

Using Artificial Neural Network to Forecast Groundwater Depth in Union County Well

A concern that researchers usually face in different applications of Artificial Neural Networks (ANN) is the determination of the size of the effective domain in a time series. In this paper, a trial-and-error method was used on a groundwater depth time series from an observation well in Union County, New Jersey, U.S., to determine the size of the effective domain in the series. Different domains of 20, 40, 60, 80, 100, and 120 preceding days were examined. Data sets for the different domains were fed to a feed-forward back-propagation ANN with one hidden layer, and the groundwater depths were forecasted. The root mean square error (RMSE) and the correlation factor (R2) of estimated and observed groundwater depths were determined for all domains. In general, the groundwater depth forecast improved, as evidenced by lower RMSEs and higher R2 values, when the domain length increased from 20 to 120. However, 80 days was selected as the effective domain because the improvement was less than 1% beyond that. Forecasted groundwater depths utilizing measured daily data (set #1) and data averaged over the effective domain (set #2) were compared. It was postulated that the more accurate nature of the measured daily data was the reason for the better forecast, with a lower RMSE (0.1027 m compared to 0.255 m), in set #1. However, the size of the input data in this set was 80 times the size of the input data in set #2, a factor that may increase the computational effort unpredictably. It was concluded that data averaged over the 80-day effective domain may be successfully utilized to lower the size of the input data sets considerably, while maintaining the effective information in the data set.
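
A minimal sketch of this setup, under assumed data: build input windows of the preceding 80 daily depths and train a one-hidden-layer feed-forward network to predict the next day's depth. The depth series below is synthetic; the study used measured well data.

```python
# Hedged sketch of the forecasting setup described above: an 80-day input
# window and a one-hidden-layer feed-forward network. The depth series is
# a synthetic placeholder for the Union County well record.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
depth = np.cumsum(rng.normal(0, 0.01, 2000)) + 5.0   # placeholder series (m)

window = 80                                           # effective domain length
X = np.array([depth[i:i + window] for i in range(depth.size - window)])
y = depth[window:]                                    # next-day depth targets

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"RMSE: {rmse:.4f} m")
```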

Signal Generator Circuit Carrying Information as Embedded Features from Multi-Transducer Signals

A novel circuit for generating a signal embedded with features about data from three sensors is presented. The suggested circuit makes use of a resistance-to-time converter employing a bridge amplifier, an integrator and a comparator. The second resistive sensor (Rz) is transformed into the duty cycle. Another bridge with a varying resistor (Ry) in the feedback of an op-amp is added in series to change the amplitude of the resulting signal proportionally, while keeping the same frequency and duty cycle representing proportional changes in the resistors Rx and Rz already mentioned. The resultant output signal carries three types of information embedded as variations of its frequency, duty cycle and amplitude.

Cloud Computing: Changing Cogitation about Computing

Cloud Computing is a new technology that helps us use the Cloud to meet our computation needs. The Cloud refers to a scalable network of computers that work together like the Internet. An important element of Cloud Computing is that we shift the processing, managing, storing and implementing of our data from local machines into the Cloud, which helps us improve efficiency. Because it is a new technology, it has both advantages and disadvantages, which are scrutinized in this article. Some pioneers of this technology are then studied. Afterwards we find that Cloud Computing will play important roles in our future lives.

Adaptive Kernel Filtering Used in Video Processing

In this paper we present a noise reduction filter for video processing. It is based on the recently proposed two-dimensional steering kernel, extended to three dimensions and further augmented to suit the spatio-temporal domain of video processing. Two alternative filters are proposed: the time-symmetric kernel and the time-asymmetric kernel. The first reduces the noise in single sequences, but to handle the problems at scene shifts the asymmetric kernel is introduced. The performance of both is tested on simulated data and on a real video sequence, together with the existing steering kernel. The proposed kernels improve the root mean squared error (RMSE) compared to the original steering kernel method on video material.
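
A heavily simplified sketch of the weighting principle follows: weights fall off with spatial and temporal distance, and the asymmetric variant suppresses the temporal side that crosses a scene shift. A plain Gaussian stands in here for the authors' data-adaptive steering kernel.

```python
# Hedged sketch of spatio-temporal kernel weighting. A fixed Gaussian stands
# in for the data-adaptive steering kernel; the asymmetric variant zeroes
# weights on frames past a detected scene cut.
import numpy as np

def kernel_weight(dx, dy, dt, h_s=1.5, h_t=1.0, cut_ahead=False):
    if cut_ahead and dt > 0:          # asymmetric: ignore frames after the cut
        return 0.0
    return np.exp(-(dx**2 + dy**2) / (2 * h_s**2) - dt**2 / (2 * h_t**2))

def filter_pixel(video, x, y, t, cut_ahead=False):
    """Denoise one pixel as a weighted average over a 3x3x3 neighbourhood."""
    num = den = 0.0
    for dt in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                w = kernel_weight(dx, dy, dt, cut_ahead=cut_ahead)
                num += w * video[t + dt, y + dy, x + dx]
                den += w
    return num / den

video = np.random.rand(5, 16, 16)     # placeholder noisy clip (t, y, x)
print(filter_pixel(video, 8, 8, 2))
```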

Study of Features for Hand-printed Recognition

The feature extraction method(s) used to recognize hand-printed characters play an important role in ICR applications. In order to achieve a high recognition rate for a recognition system, the choice of a feature that suits the given script is certainly an important task. Even if a new feature is to be designed for a given script, it is essential to know the recognition ability of the existing features for that script. The Devanagari script is used in various Indian languages besides Hindi, the mother tongue of the majority of Indians. This research examines a variety of feature extraction approaches, which have been used in various ICR/OCR applications, in the context of hand-printed Devanagari script. The study is conducted theoretically and experimentally on more than 10 feature extraction methods. The various feature extraction methods have been evaluated on a Devanagari hand-printed database comprising more than 25000 characters belonging to 43 alphabets. The recognition ability of the features has been evaluated using three classifiers, i.e. k-NN, MLP and SVM.
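
A minimal sketch of such an evaluation harness, with assumed feature matrices: the same extracted features are fed to k-NN, MLP and SVM classifiers and the cross-validated accuracies are compared. X and y below are placeholders, not the Devanagari database.

```python
# Hedged sketch of the evaluation harness implied above: compare k-NN, MLP
# and SVM on the same extracted feature set. X and y are placeholders for
# the Devanagari feature matrix and the 43 character labels.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))          # placeholder feature vectors
y = rng.integers(0, 43, size=500)       # placeholder labels (43 classes)

classifiers = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "MLP": MLPClassifier(hidden_layer_sizes=(100,), max_iter=1000),
    "SVM": SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```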

Removal of Hydrogen Sulphide from Air by Means of Fibrous Ion Exchangers

The removal of hydrogen sulphide is required for reasons of health, odour, safety and corrosivity. The means of removing hydrogen sulphide mainly depend on its concentration and the kind of medium to be purified. The paper deals with a method of hydrogen sulphide removal from air by its catalytic oxidation to elemental sulphur with the use of the Fe-EDTA complex. The possibility of obtaining fibrous filtering materials able to remove small concentrations of H2S from the air was described. The base of these materials is a fibrous ion exchanger with the Fe(III)-EDTA complex immobilized on its functional groups. The complex of trivalent iron converts hydrogen sulphide to elemental sulphur; the bivalent iron formed in the reaction is oxidized by atmospheric oxygen, so the complex of trivalent iron is continuously regenerated and the overall process can be regarded as pseudocatalytic. In the present paper, the properties of several fibrous catalysts based on ion exchangers of different chemical nature (weak acid, weak base and strong base) were described. It was shown that the main parameters affecting the process of catalytic oxidation are: the concentration of hydrogen sulphide in the air, the relative humidity of the purified air, the process time and the content of the Fe-EDTA complex in the fibres. The data presented show that filtering layers with an anion exchange package are much more active in the catalytic removal of hydrogen sulphide than cation exchangers and inert materials. In addition to the nature of the fibres, relative air humidity is a critical factor determining the efficiency of the material in purifying air of H2S. It was shown that the most promising carriers of the Fe-EDTA catalyst for hydrogen sulphide oxidation are Fiban A-6 and Fiban AK-22 fibres.
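
The redox cycle described above can be summarized, in simplified form with the EDTA ligands omitted, as the oxidation of H2S by Fe(III) followed by aerobic regeneration of Fe(III); the paper's exact mechanism may differ in detail.

```latex
% Simplified redox cycle (EDTA ligands omitted for clarity):
\begin{align}
  \mathrm{H_2S} + 2\,\mathrm{Fe^{3+}} &\longrightarrow \mathrm{S} + 2\,\mathrm{Fe^{2+}} + 2\,\mathrm{H^+} \\
  4\,\mathrm{Fe^{2+}} + \mathrm{O_2} + 4\,\mathrm{H^+} &\longrightarrow 4\,\mathrm{Fe^{3+}} + 2\,\mathrm{H_2O}
\end{align}
```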

Software Maintenance Severity Prediction for Object Oriented Systems

As the majority of faults are found in only a few of a system's modules, there is a need to identify the modules that are affected more severely than others, so that proper maintenance can be done in time, especially for critical applications. Neural networks have already been applied in software engineering, for example to build reliability growth models and to predict gross change or reusability metrics. Neural networks are non-linear, sophisticated modeling techniques able to model complex functions; they are used when the exact relationship between inputs and outputs is not known, and a key feature is that they learn this relationship through training. In the present work, various neural-network-based techniques are explored, and a comparative analysis is performed for predicting the level of maintenance needed by predicting the severity level of faults present in NASA's public domain defect dataset. The comparison of the different algorithms is made on the basis of mean absolute error, root mean square error and accuracy values. It is concluded that the Generalized Regression Network is the best algorithm for classifying software components into different levels of severity of fault impact. The algorithm can be used to develop a model for identifying modules that are heavily affected by faults.
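
A generalized regression network is essentially a kernel-weighted average over the training set; a minimal sketch (not the paper's exact configuration) is shown below, with placeholder fault metrics as input.

```python
# Hedged sketch of a Generalized Regression (Neural) Network: prediction is
# a Gaussian-kernel-weighted average of training targets, so "training"
# amounts to storing the data. Sigma and the data are illustrative.
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances
    w = np.exp(-d2 / (2 * sigma ** 2))            # Gaussian kernel weights
    return np.dot(w, y_train) / np.sum(w)         # weighted average target

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))               # placeholder fault metrics
y_train = rng.integers(1, 4, size=100).astype(float)  # severity levels 1-3

x_new = rng.normal(size=5)
print(round(grnn_predict(X_train, y_train, x_new)))   # predicted severity
```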

The Direct and Indirect Effects of Achievement Motivation on Nurturing Intellectual Giftedness

Achievement motivation is believed to promote giftedness, attracting people to invest in many programs that adopt gifted students and provide them with challenging activities. Intellectual giftedness is founded on fluid intelligence and extends to more specific abilities through growth and inputs from achievement motivation. Acknowledging the roles played by motivation in the development of giftedness leads to an effective nurturing of gifted individuals. However, no study has investigated the direct and indirect effects of achievement motivation and fluid intelligence on intellectual giftedness. Thus, this study investigated the contribution of motivational factors to giftedness development by conducting tests of fluid intelligence using the Cattell Culture Fair Test (CCFT), tests of analytical abilities using culture-reduced test items covering problem solving, pattern recognition, audio-logic, audio-matrices, and artificial language, and a self-report questionnaire for the motivational factors. A total of 180 high-scoring students were selected using the CCFT from a leading university in Malaysia. Structural equation modeling was employed, using Amos V.16, to determine the direct and indirect effects of the achievement motivation factors (self-confidence, success, perseverance, competition, autonomy, responsibility, ambition, and locus of control) on intellectual giftedness. The findings showed that the hypothesized model fitted the data, supporting the model's postulates, and showed significant and strong direct and indirect effects of motivation and fluid intelligence on intellectual giftedness.

Induction Motor Efficiency Estimation using Genetic Algorithm

Due to the high percentage of induction motors in the industrial market, there exists a large opportunity for energy savings. Replacement of working induction motors with more efficient ones can be an important source of energy savings. A calculation of energy savings and payback periods as a result of such a replacement, based on nameplate motor efficiency or the manufacturer's data, can lead to large errors [1]. The efficiency of induction motors (IMs) can be extracted using procedures that rely on no-load test results; in cases where the efficiency must be estimated on-line, such procedures may not be practical. In other cases the efficiency is estimated from the rating values of the motor, but these procedures can have errors due to the differing working conditions of the motor. In this paper, the efficiency of an IM is estimated using a genetic algorithm. The results are compared with the measured values of the torque and power, and show smaller errors for this procedure than for the conventional classical procedures; hence the cost of the equipment is reduced and on-line estimation of the efficiency becomes possible.
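
A minimal sketch of the approach, under assumptions: a genetic algorithm searches motor-model parameters that minimize the mismatch between measured and modeled torque and power. The motor model, measured values and GA settings below are hypothetical placeholders, not the paper's.

```python
# Hedged sketch of GA-based parameter estimation: evolve candidate motor
# parameters to minimize the error between measured and modeled quantities.
# motor_model and the measured values are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
measured = np.array([50.0, 11e3])          # e.g. torque (Nm), power (W)

def motor_model(params):
    """Placeholder equivalent-circuit model: parameters -> (torque, power)."""
    r, x = params
    return np.array([60.0 * r / (r + 0.2), 12e3 * r / np.hypot(r, x)])

def fitness(params):
    return -np.sum((motor_model(params) - measured) ** 2)

pop = rng.uniform(0.1, 2.0, size=(40, 2))  # initial population of parameters
for _ in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]            # select fittest half
    # Arithmetic crossover of random parent pairs, plus Gaussian mutation.
    children = 0.5 * (parents[rng.permutation(20)] +
                      parents[rng.permutation(20)]) + rng.normal(0, 0.02, (20, 2))
    pop = np.vstack([parents, children])               # elitism + offspring

best = pop[np.argmax([fitness(p) for p in pop])]
print("estimated parameters:", best)
```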

Application of Artificial Neural Networks for Temperature Forecasting

In this paper, the application of neural networks to the design of short-term temperature forecasting (STTF) systems for Kermanshah city, west of Iran, was explored. One important neural network architecture, the Multi-Layer Perceptron (MLP), is used to model the STTF systems. Our MLP-based model was trained and tested using ten years (1996-2006) of meteorological data. The results show that the MLP network has the minimum forecasting error and can be considered a good method for modeling STTF systems.

Study on Guangzhou's Employment Subcentres and Polycentricity

Since the late 1980s, the new phenomena of 'employment subcentres' or 'polycentricity' have appeared in the metropolises of North America and Western Europe, and they have been an interesting topic for academics and researchers. This paper uses one case study, Guangzhou, to explore the development and the mechanism of employment subcentres and polycentricity in Chinese metropolises, applying spatial analysis methods on the basis of the first economic census data. In conclusion, the paper argues that employment subcentres and polycentricity do exist in Chinese metropolises, and that their mechanism stems mainly from the secondary industry, instead of from the tertiary industry as in North America and Western Europe.

A Multi-Layer Consistency Protocol for Replica Management in Large Scale Systems

Large scale systems such as computational Grids are distributed computing infrastructures that can provide globally available network resources. The evolution of information processing systems in the Data Grid is characterized by a strong decentralization of data across several sites, whose objective is to ensure the availability and reliability of the data and to provide fault tolerance and scalability, which is only possible with the use of replication techniques. Unfortunately, the use of these techniques has a high cost, because consistency must be maintained between the distributed data. Nevertheless, agreeing to live with certain imperfections can improve the performance of the system by improving concurrency. In this paper, we propose a multi-layer protocol combining the pessimistic and optimistic approaches, conceived for data consistency maintenance in large scale systems. Our approach is based on a hierarchical representation model with three layers, and its objective is twofold: first, it makes it possible to reduce response times compared to a completely pessimistic approach, and second, it improves the quality of service compared to an optimistic approach.
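
The abstract stays high-level, but the combination can be sketched as follows, under stated assumptions: writes at the top of the replica tree are applied synchronously (pessimistic), while lower layers receive updates lazily (optimistic). The structure and policy below are illustrative assumptions, not the paper's exact protocol.

```python
# Hedged sketch of a hybrid consistency scheme over a replica tree: the top
# layer commits writes synchronously (pessimistic), while descendants are
# updated lazily (optimistic). Structure and policy are assumptions only.
from dataclasses import dataclass, field

@dataclass
class Replica:
    name: str
    data: dict = field(default_factory=dict)
    children: list = field(default_factory=list)
    lazy_queue: list = field(default_factory=list)

def write(root, key, value):
    root.data[key] = value                    # pessimistic: root commits first
    for child in root.children:               # optimistic: defer to children
        child.lazy_queue.append((key, value))

def sync(replica):
    for key, value in replica.lazy_queue:     # apply deferred updates
        replica.data[key] = value
        for child in replica.children:
            child.lazy_queue.append((key, value))
    replica.lazy_queue.clear()

root = Replica("root", children=[Replica("site-A"), Replica("site-B")])
write(root, "x", 42)
sync(root.children[0])                        # site-A converges when synced
```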

Incorporation Mechanism of Stabilizing Simulated Lead-Laden Sludge in Aluminum-Rich Ceramics

This study investigated a strategy of blending lead-laden sludge with Al-rich precursors to reduce the release of metals from the stabilized products. Using PbO as the simulated lead-laden sludge to sinter with γ-Al2O3 at Pb:Al molar ratios of 1:2 and 1:12, PbAl2O4 and PbAl12O19 were formed as the respective final products of the sintering process. By firing the PbO + γ-Al2O3 mixtures with the different Pb/Al molar ratios at 600 to 1000 °C, the lead transformation was determined through X-ray diffraction (XRD) data. In the Pb/Al molar ratio 1/2 system, the formation of PbAl2O4 is initiated at 700 °C, but effective formation was observed above 750 °C. An intermediate phase, Pb9Al8O21, was detected in the temperature range of 800-900 °C. However, different incorporation behavior was observed when sintering PbO with the Al-rich precursor at a Pb/Al molar ratio of 1/12 during the formation of PbAl12O19 in this system. In the sintering process, the effects of both temperature and time on the formation of the PbAl2O4 and PbAl12O19 phases were evaluated. Finally, a prolonged leaching test modified from the U.S. Environmental Protection Agency's toxicity characteristic leaching procedure (TCLP) was used to evaluate the durability of the PbO, Pb9Al8O21, PbAl2O4 and PbAl12O19 phases. Comparison of the leaching results for the four phases demonstrated the higher intrinsic resistance of PbAl12O19 against acid attack.
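
The two target incorporation reactions implied by the Pb:Al ratios of 1:2 and 1:12 can be written, in simplified oxide form (γ-Al2O3 written as Al2O3, intermediate phases omitted), as:

```latex
% Simplified overall formation reactions consistent with the reported
% Pb:Al molar ratios of 1:2 and 1:12:
\begin{align}
  \mathrm{PbO} + \mathrm{Al_2O_3} &\longrightarrow \mathrm{PbAl_2O_4} \\
  \mathrm{PbO} + 6\,\mathrm{Al_2O_3} &\longrightarrow \mathrm{PbAl_{12}O_{19}}
\end{align}
```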