A Robust LS-SVM Regression

In Least Squares SVM (LS-SVM), the nonlinear solution is obtained by first mapping the input vector nonlinearly into a high-dimensional kernel space, where the solution is then calculated from a set of linear equations. Compared with the original SVM, which involves a quadratic programming task, LS-SVM thus simplifies the required computation, but unfortunately the sparseness of the standard SVM is lost. A further problem is that the LS-SVM estimate is optimal only when the training samples are corrupted by Gaussian noise. In this paper a geometric view of the kernel space is introduced, which enables us to develop a new formulation that achieves a sparse and robust estimate.
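
For orientation, the linear system at the heart of the standard (non-robust) LS-SVM baseline can be written down in a few lines. The sketch below is a minimal NumPy illustration of that baseline with an RBF kernel; the kernel width and regularization values are illustrative, and the paper's sparse robust formulation is not reproduced here.

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Fit a baseline LS-SVM regressor by solving its linear KKT system:
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma**2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, support values alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    d = np.sum(X_train**2, axis=1)[None, :] \
        + np.sum(X_new**2, axis=1)[:, None] - 2 * X_new @ X_train.T
    return np.exp(-d / (2 * sigma**2)) @ alpha + b

# Toy usage: fit a noisy sine and predict at one point.
rng = np.random.default_rng(0)
X = np.linspace(0, 6, 50)[:, None]
y = np.sin(X).ravel() + rng.normal(0, 0.1, 50)
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(X, alpha, b, np.array([[1.5]])))
```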

The Effects of Processing and Preservation on the Sensory Qualities of Prickly Pear Juice

Prickly pear juice has received renewed attention with regard to the effects of processing and preservation on its sensory qualities (colour, taste, flavour, aroma, astringency, visual browning, and overall acceptability). Juice was prepared by homogenizing fruit and treating the pulp with pectinase (from Aspergillus niger). The juice treatments applied were sugar addition, acidification, heat treatment, refrigeration, and freezing and thawing. Prickly pear pulp and juice had unique properties (a low pH of 3.88, soluble solids of 3.68 °Brix, and a high titratable acidity of 0.47). Sensory profiling and descriptive analyses revealed that non-treated juice had a bitter taste with high astringency, whereas treated prickly pear juice was significantly sweeter. All treated juices had good sensory acceptance, with values approximating or exceeding 7. Regression analysis of the consumer sensory attributes indicated an overwhelming rejection of non-treated prickly pear juice, while treated prickly pear juice was rated acceptable overall. Thus, the treatments educed favourable sensory responses and may have positive implications for consumer acceptability.

Power System Security Constrained Economic Dispatch Using Real Coded Quantum Inspired Evolution Algorithm

This paper presents a new optimization technique based on quantum computing principles to solve the security constrained power system economic dispatch (SCED) problem. The proposed technique is a population-based algorithm that uses quantum computing elements in coding and evolving groups of potential solutions, reaching the optimum through a partially directed random approach. The SCED problem is formulated as a constrained optimization problem in a way that ensures secure and economic system operation. A Real Coded Quantum-Inspired Evolution Algorithm (RQIEA) is then applied to solve the constrained optimization formulation. Simulation results of the proposed approach are compared with those reported in the literature. The outcome is very encouraging and shows that RQIEA is well suited to solving the SCED problem.
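
The paper's exact operators are not reproduced here, but the observe-evaluate-update loop common to real-coded quantum-inspired evolutionary algorithms can be sketched as follows. The attraction and contraction constants are illustrative assumptions, and a real SCED run would use a cost function encoding generator costs and security constraints (e.g., via penalty terms).

```python
import numpy as np

def rqiea_minimize(cost, lb, ub, pop=20, iters=200, seed=0):
    """Generic real-coded quantum-inspired EA sketch.

    Each individual is a 'quantum' particle: a centre and a width per
    variable. Observation samples a real solution; the best observation
    attracts the centres while the widths contract, mimicking the
    rotation/collapse steps of quantum-inspired algorithms."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    centre = rng.uniform(lb, ub, size=(pop, lb.size))
    width = np.tile((ub - lb) / 2.0, (pop, 1))
    best_x, best_f = None, np.inf
    for _ in range(iters):
        x = np.clip(centre + width * rng.standard_normal(centre.shape), lb, ub)
        f = np.array([cost(xi) for xi in x])
        if f.min() < best_f:
            best_f, best_x = f.min(), x[f.argmin()].copy()
        centre += 0.5 * (best_x - centre)   # attract centres to the best
        width *= 0.97                       # contract the search width
    return best_x, best_f

# Toy usage: minimise an illustrative quadratic "fuel cost" of two units.
cost = lambda p: 0.01 * p[0]**2 + 2.0 * p[0] + 0.02 * p[1]**2 + 1.5 * p[1]
print(rqiea_minimize(cost, lb=np.array([10.0, 10.0]), ub=np.array([100.0, 100.0])))
```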

Neural Network Based Determination of Splice Junctions by ROC Analysis

The gene, the principal unit of inheritance, is an ordered sequence of nucleotides. The genes of eukaryotic organisms include alternating segments of exons and introns. A region of deoxyribonucleic acid (DNA) within a gene that contains instructions for coding a protein is called an exon. Non-coding regions, called introns, regulate gene expression and are removed from the messenger ribonucleic acid (RNA) in a splicing process. This paper proposes to determine splice junctions, the exon-intron boundaries, by analyzing DNA sequences. A splice junction can be either exon-intron (EI) or intron-exon (IE). Because of the popularity and suitability of artificial neural networks (ANNs) in genetic applications, various ANN models are applied in this research. Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), and Generalized Regression Neural Networks (GRNN) are used to analyze and detect the splice junctions of gene sequences. 10-fold cross validation is used to demonstrate the accuracy of the networks, and their real performances are assessed by applying Receiver Operating Characteristic (ROC) analysis.
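
As a rough illustration of this evaluation pipeline, the hedged scikit-learn sketch below trains an MLP under 10-fold cross-validation and scores it by ROC AUC. The arrays X and y are random placeholders standing in for one-hot encoded DNA windows and their boundary labels; the paper's RBF and GRNN models are not shown.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical data: X holds one-hot encoded 60-base DNA windows,
# y is 1 for a true exon-intron (EI) boundary and 0 otherwise.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 240)).astype(float)
y = rng.integers(0, 2, size=500)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
print("10-fold cross-validated ROC AUC:", roc_auc_score(y, scores))
```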

A Study of Panel Logit Model and Adaptive Neuro-Fuzzy Inference System in the Prediction of Financial Distress Periods

The purpose of this paper is to present two different approaches to financial distress pre-warning models appropriate for risk supervisors, investors, and policy makers. We examine a sample of the financial institutions and electronic companies of the Taiwan Security Exchange (TSE) market from 2002 through 2008. We present a binary logistic regression with panel data analysis. With the pooled binary logistic regression we can build a model including more variables than with random effects, while the in-sample and out-of-sample forecasting performance is higher under random effects estimation than under pooled regression. We also estimate an Adaptive Neuro-Fuzzy Inference System (ANFIS) with Gaussian and Generalized Bell (Gbell) membership functions and find that ANFIS significantly outperforms the logit regressions in both in-sample and out-of-sample periods, indicating that ANFIS is a more appropriate tool for financial risk managers and for economic policy makers in central banks and national statistical services.
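
A minimal sketch of the pooled binary logistic regression step, assuming statsmodels and a hypothetical firm-year data frame with two illustrative financial ratios; the random effects estimation and the ANFIS models are not shown.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical pooled panel: one row per firm-year with a distress flag
# and two illustrative ratios (placeholders, not the study's variables).
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "distress": rng.integers(0, 2, n),
    "leverage": rng.normal(0.5, 0.2, n),
    "roa": rng.normal(0.03, 0.05, n),
})
X = sm.add_constant(df[["leverage", "roa"]])
pooled = sm.Logit(df["distress"], X).fit(disp=0)
print(pooled.summary())
```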

Implementing an Intuitive Reasoner with a Large Weather Database

In this paper, the implementation of a rule-based intuitive reasoner is presented. The implementation comprises two parts: the rule induction module and the intuitive reasoner. A large weather database was acquired as the data source, and twelve weather variables were chosen as the "target variables" whose values were predicted by the intuitive reasoner. A "complex" situation was simulated by making only subsets of the data available to the rule induction module; as a result, the induced rules were based on incomplete information with varying levels of certainty. The certainty level was modeled by a metric called "Strength of Belief", which was assigned to each rule or datum as ancillary information about the confidence in its accuracy. Two techniques were employed to induce rules from the data subsets: decision trees for the discrete target variables and multi-polynomial regression for the continuous ones. The intuitive reasoner was tested for its ability to use the induced rules to predict the classes of the discrete target variables and the values of the continuous target variables. It implemented two types of reasoning, fast and broad, where, by analogy to human thought, the former corresponds to fast decision making and the latter to deeper contemplation. For reference, a weather data analysis approach that had been applied to similar tasks was adopted to analyze the complete database and create predictive models for the same 12 target variables. The values predicted by the intuitive reasoner and the reference approach were compared with actual data. The intuitive reasoner reached near-100% accuracy for two continuous target variables, and for the discrete target variables it predicted at least 70% as accurately as the reference reasoner. Since the intuitive reasoner operated on rules derived from only about 10% of the total data, it demonstrated potential advantages over conventional methods in dealing with sparse data sets.
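
The two rule induction techniques can be sketched as follows, assuming scikit-learn and synthetic stand-ins for the weather data; the "Strength of Belief" bookkeeping is only indicated in a comment, and the reasoner itself is not shown.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))            # hypothetical weather predictors

# Discrete target (e.g. rain / no rain): induce rules with a decision tree.
y_cls = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
tree = DecisionTreeClassifier(max_depth=3).fit(X, y_cls)

# Continuous target (e.g. temperature): multi-polynomial regression.
y_reg = 2.0 * X[:, 0] ** 2 - X[:, 2] + rng.normal(0, 0.1, 400)
poly = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y_reg)

# Each induced rule/model would carry a "Strength of Belief" weight, e.g.
# proportional to how much of the data subset supports it.
print(tree.score(X, y_cls), poly.score(X, y_reg))
```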

Institutional Efficiency of Commonhold Industrial Parks Using a Polynomial Regression Model

Based on the assumptions of neo-classical economics and rational choice / public choice theory, this paper investigates the regulation of industrial land use in Taiwan by homeowners associations (HOAs) as opposed to traditional government administration. The comparison, which applies transaction cost theory and a polynomial regression analysis, shows that HOAs are superior to conventional government administration in terms of transaction costs and overall efficiency. A case study comparing Taiwan's commonhold industrial park, NangKang Software Park, with traditional government-run counterparts was analyzed using limited data on costs and returns. This empirical study of the relative efficiency of governmental and private institutions supports the theoretical proposition, and the numerical results demonstrate the efficiency of the established model.
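
A minimal sketch of fitting a polynomial (here quadratic) cost curve with NumPy, on invented illustrative numbers; the actual case study data and variables are not reproduced.

```python
import numpy as np

# Hypothetical scale/cost pairs for park administrations; fit a quadratic
# transaction-cost curve and predict the cost at an intermediate scale.
scale = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
cost = np.array([2.1, 3.0, 4.6, 7.1, 10.4])
coeffs = np.polyfit(scale, cost, deg=2)        # quadratic polynomial fit
print(np.polyval(coeffs, 3.5))                 # predicted cost at scale 3.5
```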

Automatic Removal of Ocular Artifacts Using JADE Algorithm and Neural Network

The electroencephalogram (EEG) is useful for clinical diagnosis and biomedical research. EEG signals often contain strong electrooculogram (EOG) artifacts produced by eye movements and eye blinks, especially in EEG recorded from frontal channels. These artifacts obscure the underlying brain activity, making its visual or automated inspection difficult. The goal of ocular artifact removal is to remove the ocular artifacts from the recorded EEG, leaving the underlying background signals due to brain activity. In recent times, Independent Component Analysis (ICA) algorithms have demonstrated superior potential in obtaining the least dependent source components. In this paper, the independent components are obtained using the JADE algorithm (best separating algorithm) and are classified as either artifact components or neural components. A neural network is used for this classification; it requires input features that faithfully represent the true character of the input signals, so that the network can classify the signals based on the key characteristics that differentiate them. In this work, autoregressive (AR) coefficients are used as the input features. Two neural network approaches are used to learn classification rules from the EEG data: first, a Polynomial Neural Network (PNN) trained by the GMDH (Group Method of Data Handling) algorithm, and second, a feed-forward neural network (FNN) classifier trained by standard back-propagation. The results show that JADE-FNN performs better than JADE-PNN.
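
Since JADE itself is not shipped with scikit-learn, the sketch below substitutes FastICA to illustrate the same pipeline: unmix the channels into independent components, extract AR coefficients per component, and classify with a feed-forward network. Signals and component labels are synthetic placeholders, not real EEG.

```python
import numpy as np
from sklearn.decomposition import FastICA        # stand-in for JADE
from statsmodels.regression.linear_model import yule_walker
from sklearn.neural_network import MLPClassifier

# Hypothetical 8-channel recording: mix non-Gaussian sources linearly.
rng = np.random.default_rng(0)
S = rng.laplace(size=(8, 2000))
eeg = rng.standard_normal((8, 8)) @ S

# 1) Unmix the recording into independent components.
ica = FastICA(n_components=8, random_state=0)
components = ica.fit_transform(eeg.T).T

# 2) Describe each component by its AR coefficients (Yule-Walker).
feats = np.array([yule_walker(c, order=6)[0] for c in components])

# 3) A neural network labels components as ocular artifact vs. neural;
#    labels here are placeholders for illustration only.
labels = np.array([1, 0, 0, 0, 0, 0, 1, 0])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(feats, labels)
```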

Graphic Analysis of Genotype by Environment Interaction for Maize Hybrid Yield Using Site Regression Stability Model

Selection of maize (Zea mays) hybrids with wide adaptability across diverse farming environments is important prior to recommending them, in order to achieve a high rate of hybrid adoption. Grain yield of 14 maize hybrids, tested in a randomized complete-block design with four replicates across 22 environments in Iran, was analyzed using the site regression (SREG) stability model. The biplot technique facilitates a visual evaluation of superior genotypes, which is useful for cultivar recommendation and mega-environment identification. The objectives of this study were (i) to identify suitable hybrids with both high mean performance and high stability, and (ii) to determine mega-environments for maize production in Iran. Biplot analysis identified two mega-environments. The first mega-environment included KRM, KSH, MGN, DZF A, KRJ, DRB, DZF B, SHZ B, and KHM, where G10 was the best performing hybrid. The second mega-environment included ESF B, ESF A, and SHZ A, where G4 was the best hybrid. According to the ideal-hybrid biplot, G10 was better than all other hybrids, followed by G1 and G3. These hybrids were identified as the best hybrids, combining high grain yield with high yield stability. GGE biplot analysis thus provided a framework for identifying target testing locations that discriminate among genotypes that are high yielding and stable.
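
The SREG/GGE computation underlying such biplots is, in essence, a singular value decomposition of the environment-centred yield matrix; here is a minimal sketch on a random stand-in table (the study's actual yields are not reproduced).

```python
import numpy as np

# Hypothetical genotype-by-environment yield table (14 hybrids x 22 sites).
rng = np.random.default_rng(0)
Y = rng.normal(6.0, 1.0, size=(14, 22))

# SREG/GGE: centre each environment (column), then take the SVD; the first
# two principal components form the biplot axes.
G = Y - Y.mean(axis=0, keepdims=True)
U, s, Vt = np.linalg.svd(G, full_matrices=False)
genotype_scores = U[:, :2] * s[:2] ** 0.5        # symmetric scaling
environment_scores = Vt.T[:, :2] * s[:2] ** 0.5
explained = s[:2] ** 2 / (s ** 2).sum()
print("PC1+PC2 variance explained:", explained.sum())
```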

Correlations between Cleaning Frequency of Reservoir and Water Tower and Parameters of Water Quality

This study sampled and analyzed the water quality of water reservoirs and water towers installed in two kinds of residential buildings and in school facilities. Water quality data were collected and correlated with the sanitization frequency of the water reservoirs, which was obtained by questioning building managers about the inspection charts kept for the reservoir equipment. The statistical software package SPSS was applied to the two groups of data (cleaning frequency and water quality) for regression analysis to determine the optimal sanitization frequency. The correlation coefficient (R) in this paper represents the degree of correlation, with values of R ranging from +1 to -1. After investigating three categories of drinking water users, this study found that the sanitization frequency of the water reservoirs significantly influenced the quality of the drinking water: a higher frequency of sanitization (more than four times per year) implied a higher drinking water quality. The results indicate that water reservoirs and water towers should be sanitized at least twice annually to achieve safe drinking water.
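
A minimal sketch of the correlation and regression step, with invented illustrative numbers standing in for the survey data (SciPy in place of SPSS):

```python
import numpy as np
from scipy import stats

# Hypothetical records: sanitizations per year vs. a water quality index.
cleanings = np.array([1, 1, 2, 2, 3, 4, 4, 5, 6, 6])
quality = np.array([55, 58, 63, 61, 70, 78, 76, 82, 85, 88])

r, p = stats.pearsonr(cleanings, quality)        # R lies in [-1, +1]
slope, intercept, *_ = stats.linregress(cleanings, quality)
print(f"R = {r:.2f} (p = {p:.3f}); quality ~ {intercept:.1f} + {slope:.1f} * cleanings")
```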

Power Forecasting of Photovoltaic Generation

Photovoltaic power generation forecasting is an important task in renewable energy power system planning and operation. This paper explores the application of neural networks (NNs) to the design of a one-week-ahead photovoltaic power generation forecasting system, using a weather database of global irradiance and temperature for the city of Ghardaia (southern Algeria), collected with a data acquisition system. Simulations were run, and the results show that the neural network technique can reduce the photovoltaic power generation forecasting error.
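
A hedged scikit-learn sketch of such a forecasting model, using synthetic irradiance and temperature records in place of the Ghardaia database; the network size and the toy PV response are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical hourly records: [global irradiance (W/m2), temperature (C)].
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0, 1000, 2000), rng.uniform(5, 40, 2000)])
pv_power = 0.15 * X[:, 0] * (1 - 0.004 * (X[:, 1] - 25)) + rng.normal(0, 5, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, pv_power, random_state=0)
nn = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
nn.fit(X_tr, y_tr)
print("R^2 on held-out data:", nn.score(X_te, y_te))
```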

Emotional Intelligence and Retention: The Moderating Role of Job Involvement

The main aim of the current study was to examine the effect of emotional intelligence on retention, and to analyze the role of job involvement as a moderator of that effect. Using data gathered from 241 employees working in hotel and tourism corporations listed on the Amman Stock Exchange in Jordan, emotional intelligence, job involvement, and retention were measured. Hierarchical regression analyses were used to test the three main hypotheses. Results indicated that retention was related to emotional intelligence. Moreover, the study yielded support for the claim that job involvement moderates the relationship between emotional intelligence and retention.
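
Moderation in a hierarchical regression is typically tested by entering an interaction term in a later step; below is a minimal statsmodels sketch on synthetic stand-in scores (the study's actual scales are not reproduced).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical standardized scores for 241 employees.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ei": rng.standard_normal(241),           # emotional intelligence
    "ji": rng.standard_normal(241),           # job involvement
})
df["retention"] = 0.4 * df.ei + 0.2 * df.ji + 0.3 * df.ei * df.ji \
                  + rng.standard_normal(241)

step1 = smf.ols("retention ~ ei", data=df).fit()
step2 = smf.ols("retention ~ ei + ji", data=df).fit()
step3 = smf.ols("retention ~ ei * ji", data=df).fit()   # adds ei:ji moderator
print(step3.params["ei:ji"], step3.compare_f_test(step2))
```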

Determinants of the U.S. Current Account

This article provides empirical evidence on the effect of domestic and international factors on the U.S. current account deficit. Linear dynamic regression and vector autoregression (VAR) models are employed to estimate the relationships over the period from 1986 to 2011. The findings suggest that the current and lagged private saving rate and the current accounts of East Asian economies have played a vital role in driving the U.S. current account. Additionally, using Granger causality tests and variance decompositions, changes in productivity growth and in foreign domestic demand are found to significantly influence changes in the U.S. current account. In summary, the empirical relationship between the U.S. current account deficit and its determinants is sensitive to the choice of regression model and specification.
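
A minimal statsmodels sketch of the VAR, Granger causality, and variance decomposition steps, on random placeholder series (104 quarters standing in for 1986-2011; variable names are illustrative):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical quarterly series: current account (% GDP), private saving
# rate, and productivity growth.
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.standard_normal((104, 3)),
                    columns=["ca", "saving", "productivity"])

model = VAR(data)
res = model.fit(maxlags=4, ic="aic")
granger = res.test_causality("ca", ["saving", "productivity"], kind="f")
print(granger.summary())
res.fevd(10).summary()        # variance decomposition over 10 quarters
```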

Burning Rate Response of Solid Fuels in Laminar Boundary Layer

The transient burning behavior of solid fuels under oxidizer gas flow is investigated numerically by analyzing the regression rate responses to imposed sudden and oscillatory variations in the inflow properties. The conjugate problem is considered: the flow and solid-phase governing equations are solved simultaneously to compute the fuel regression rate. The advection upstream splitting method is used as the flow computation scheme within a finite volume method. The ignition phase is fully simulated to obtain the exact initial condition for the response analysis. The results show that the transient burning effects that lead to combustion instabilities and intermittent extinctions, familiar from solid propellants, can also be observed in solid fuels.

A Comparison of Marginal and Joint Generalized Quasi-likelihood Estimating Equations Based On the Com-Poisson GLM: Application to Car Breakdowns Data

In this paper, we apply and compare two generalized estimating equation approaches to the analysis of car breakdown data from Mauritius. The number of breakdowns experienced by a machine is a highly under-dispersed count random variable, and its value can be attributed to factors related to the mechanical input and output of that machine. Analyzing such under-dispersed count observations as a function of explanatory factors has been a challenging problem. Here we aim at estimating the effects of various factors on the number of breakdowns experienced by a passenger car, based on a study performed in Mauritius over one year. Observing that the number of passenger car breakdowns is highly under-dispersed, we model and analyze these data using a Com-Poisson regression model. We use two types of quasi-likelihood estimation approaches to estimate the parameters of the model: the marginal and the joint generalized quasi-likelihood estimating equation approaches. The under-dispersion parameter is estimated to be around 2.14, justifying the appropriateness of the Com-Poisson distribution for modelling the under-dispersed count responses recorded in this study.
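
For reference, the Com-Poisson probability mass function, whose dispersion parameter the study estimates at about 2.14, is the standard form

```latex
P(Y = y) = \frac{\lambda^{y}}{(y!)^{\nu}\, Z(\lambda,\nu)}, \qquad
Z(\lambda,\nu) = \sum_{j=0}^{\infty} \frac{\lambda^{j}}{(j!)^{\nu}}, \qquad y = 0, 1, 2, \ldots
```

where \nu = 1 recovers the Poisson distribution, \nu > 1 indicates under-dispersion (as in these data), and \nu < 1 indicates over-dispersion.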

Reconstitute Information about Discontinued Water Quality Variables in the Nile Delta Monitoring Network Using Two Record Extension Techniques

The world economic crisis and budget constraints have caused authorities, especially those in developing countries, to rationalize water quality monitoring activities. Rationalization consists of reducing the number of monitoring sites, the number of samples, and/or the number of water quality variables measured. The reduction in water quality variables is usually based on correlation: if two variables exhibit high correlation, it is an indication that some of the information produced may be redundant, so one variable can be discontinued while the other continues to be measured. Later, a regression technique is employed to reconstitute information about the discontinued variable by using the continuously measured one as an explanatory variable. In this paper, two record extension techniques are employed for this purpose: ordinary least squares (OLS) regression and the Line of Organic Correlation (LOC). An empirical experiment is conducted using water quality records from the Nile Delta water quality monitoring network in Egypt, and the record extension techniques are compared for their ability to predict different statistical parameters of the discontinued variables. Results show that OLS is better at estimating individual water quality records but underestimates the variance of the extended records. The LOC technique is superior in preserving characteristics of the entire distribution and avoids the underestimation of variance. It is concluded that OLS can be used for the substitution of missing values, while LOC is preferable for inferring statements about the probability distribution.
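
The two line fits differ only in the slope, and a short NumPy sketch makes the variance effect concrete: on synthetic overlap records, the LOC variance ratio comes out at 1 by construction, while the OLS ratio equals r squared.

```python
import numpy as np

def extend_record(x, y):
    """Fit OLS and LOC lines of y on x; a minimal sketch.

    OLS slope = r * sy/sx        (minimises prediction error, shrinks variance)
    LOC slope = sign(r) * sy/sx  (preserves the variance of the extended record)
    """
    r = np.corrcoef(x, y)[0, 1]
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    ols_slope = r * sy / sx
    loc_slope = np.sign(r) * sy / sx
    ols = (y.mean() - ols_slope * x.mean(), ols_slope)
    loc = (y.mean() - loc_slope * x.mean(), loc_slope)
    return ols, loc

rng = np.random.default_rng(0)
x = rng.normal(size=200)                       # continuously measured variable
y = 0.8 * x + rng.normal(0, 0.6, 200)          # discontinued variable (overlap)
(ols_b, ols_m), (loc_b, loc_m) = extend_record(x, y)
print("OLS variance ratio:", (ols_m * x.std(ddof=1)) ** 2 / y.var(ddof=1))
print("LOC variance ratio:", (loc_m * x.std(ddof=1)) ** 2 / y.var(ddof=1))
```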

A Study on the Differential Diagnostic Model for Newborn Hearing Loss Screening

According to statistics, the prevalence of congenital hearing loss in Taiwan is approximately six per thousand; furthermore, one per thousand of infants have severe hearing impairment. Hearing ability during infancy has a significant impact on the future development of a child's oral expression, language maturity, cognitive performance, educational ability, and social behavior. Although most children born with hearing impairment have sensorineural hearing loss, almost every child retains at least some residual hearing. If provided with a hearing aid or cochlear implant (a bionic ear) in a timely manner, in addition to hearing and speech training, even severely hearing-impaired children can still learn to talk. On the other hand, those who are not diagnosed, and who are thus unable to begin hearing and speech rehabilitation in time, may lose an important opportunity to live a complete and healthy life. Eventually, the lack of hearing and speaking ability will affect the development of mental and physical functions, intelligence, and social adaptability. Not only does this problem result in an irreparable regret for the hearing-impaired child's lifetime, it also creates a heavy burden for the family and society. Therefore, it is necessary to establish a computer-assisted predictive model that can accurately detect and help diagnose newborn hearing loss, so that early interventions can be provided in time and waste of medical resources avoided. This study uses information from the neonatal database of the case hospital as the subjects, adopting two different analysis methods: using a support vector machine (SVM) directly for model prediction, and using logistic regression to screen factors prior to model prediction with the SVM. The results indicate that prediction accuracy is as high as 96.43% when the factors are screened and selected through logistic regression. Hence, the model constructed in this study can provide real help to physicians in clinical diagnosis and genuinely benefit the early intervention of newborn hearing impairment.
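
A hedged scikit-learn sketch of the two-stage idea: screen factors with a logistic regression (here L1-penalised, as a stand-in for significance-based screening), then classify the retained factors with an SVM. The data are random placeholders for the neonatal records.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical neonatal records: 20 candidate risk factors, binary outcome.
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 20))
y = (X[:, 0] - 0.8 * X[:, 3] + rng.normal(0, 1, 400) > 0).astype(int)

# Stage 1: keep the factors the penalised logistic regression retains.
screen = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5))
# Stage 2: classify the screened factors with an RBF-kernel SVM.
model = make_pipeline(screen, SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```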

Identification of Aircraft Gas Turbine Engine's Temperature Condition

At an early stage of diagnosing the technical condition of an aviation gas turbine engine (GTE), when the available information is fuzzy, limited, and uncertain, probability-statistical methods are shown to be poorly grounded, whereas the soft computing technologies of fuzzy logic and neural networks can be applied efficiently at these diagnosing stages. Multiple linear and nonlinear models (regression equations) are trained with high accuracy on the basis of statistical fuzzy data. When sufficient information is available, a recurrent algorithm is proposed for identifying the GTE technical condition from measurements of the input and output parameters of the generalized multiple linear and nonlinear models in the presence of measurement noise (a new recursive least squares method (LSM)). As an application of the given technique, the technical condition of the operating aviation engine D30KU-154 was estimated at an altitude of H = 10600 m.
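
A minimal sketch of recursive least squares identification with a forgetting factor, on a toy linear model with noisy measurements; the regressor structure and parameter values are illustrative assumptions, not the engine model from the paper.

```python
import numpy as np

def rls_identify(phi_rows, y, lam=0.99, delta=1000.0):
    """Recursive least squares parameter identification (minimal sketch).

    phi_rows : regressor vectors, one per measurement
    y        : measured outputs
    lam      : forgetting factor (lam = 1 gives ordinary RLS)
    """
    n = phi_rows.shape[1]
    theta = np.zeros(n)                 # parameter estimate
    P = delta * np.eye(n)               # inverse information matrix
    for phi, yk in zip(phi_rows, y):
        k = P @ phi / (lam + phi @ P @ phi)      # gain vector
        theta = theta + k * (yk - phi @ theta)   # update with the innovation
        P = (P - np.outer(k, phi @ P)) / lam     # covariance update
    return theta

# Toy identification of a linear temperature model with measurement noise.
rng = np.random.default_rng(0)
Phi = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
true = np.array([300.0, 12.5, -4.2])
Y = Phi @ true + rng.normal(0, 1.0, 500)
print(rls_identify(Phi, Y))              # converges towards `true`
```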

Improving the Effectiveness of Software Testing through Test Case Reduction

This paper proposes a new technique for improving the efficiency of software testing, based on reducing the test cases that have to be executed for a given piece of software. The approach exploits the advantage of regression testing, where fewer test cases lessen the time consumed by testing as a whole, and it also offers a means to generate test cases automatically. Compared with a technique in the literature where the tester has no option but to generate test cases manually, the proposed technique provides a better option. For test case reduction, the technique uses simple algebraic conditions to assign fixed values (maximum, minimum, and constant) to variables. By doing this, the variable values are limited to a definite range, resulting in fewer possible test cases to process. The technique can also be applied to program loops and arrays.
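
The value-fixing idea can be sketched in a few lines: each variable contributes only its minimum, maximum, and a constant value, so the cross product of cases stays small. Variable names and values below are illustrative, not the paper's algebraic conditions.

```python
from itertools import product

# Restrict each input variable to representative fixed values only.
variables = {
    "x": {"min": 0, "max": 100, "const": 50},
    "y": {"min": -10, "max": 10, "const": 0},
}

def reduced_test_cases(variables):
    """Generate test cases from the fixed representative values only."""
    names = list(variables)
    value_sets = [sorted(set(variables[n].values())) for n in names]
    for combo in product(*value_sets):
        yield dict(zip(names, combo))

for case in reduced_test_cases(variables):
    print(case)   # 9 cases instead of 101 * 21 exhaustive combinations
```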

Predictors of Academic Achievement of Student ICT Teachers with Different Learning Styles

The main purpose of this study was to determine the predictors of academic achievement of student Information and Communications Technologies (ICT) teachers with different learning styles. Participants were 148 student ICT teachers from Ankara University. Participants were asked to fill out a personal information sheet, the Turkish version of Kolb's Learning Style Inventory, Weinstein's Learning and Study Strategies Inventory, Schommer's Epistemological Beliefs Questionnaire, and Eysenck's Personality Questionnaire. Stepwise regression analyses showed that the statistically significant predictors of the academic achievement of the accommodators were attitudes and high school GPAs; of the divergers, anxiety; of the convergers, gender, epistemological beliefs, and motivation; and of the assimilators, gender, personality, and test strategies. Implications for ICT teaching-learning processes and teacher education are discussed.
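
Stepwise selection can be sketched with scikit-learn's forward SequentialFeatureSelector as a stand-in for the p-value-based stepwise procedure; predictors and outcome below are random placeholders, not the study's instruments.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

# Hypothetical predictors (attitudes, anxiety, motivation, ...) and GPA.
rng = np.random.default_rng(0)
X = rng.standard_normal((148, 6))
gpa = 0.5 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(0, 0.5, 148)

sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=2,
                                direction="forward")
sfs.fit(X, gpa)
print("Selected predictor columns:", np.flatnonzero(sfs.get_support()))
```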