Straightness Error Compensation Servo-system for Single-axis Linear Motor Stage

Since the straightness error of a linear motor stage depends heavily on machining and assembly accuracy, there is a limit to the maximum realizable accuracy. To cope with this limitation, this paper proposes a servo system that compensates for the straightness error of a linear motor stage. The servo system is mounted on the slider of the linear motor stage and moves in the direction of the straightness error so as to compensate for it. Because the straightness error of the slider is position-dependent and repeatable, feedforward compensation control is applied to the platform servo control. Considering the required fine positioning accuracy, a platform driven by an electromagnetic actuator is proposed and a sliding mode control is applied. The effectiveness of the sliding mode control is verified with experimental results.
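
As a rough illustration of the control idea, the following Python fragment sketches a first-order sliding mode position controller for the compensation platform, assuming a simple double-integrator platform model; the mass, gains, disturbance, and reference profile are illustrative, not values from the paper.

    import numpy as np

    # Minimal sliding mode sketch for the compensation platform, assuming a
    # double-integrator model m*x'' = u + d (all parameters illustrative).
    m, lam, K, eps = 0.5, 50.0, 20.0, 1e-3   # mass, surface slope, gain, boundary layer
    dt, x, v = 1e-4, 0.0, 0.0                # time step, position, velocity

    def reference(t):
        # Repeatable, position-dependent straightness error to cancel (illustrative).
        return 5e-6 * np.sin(2 * np.pi * 10 * t)

    for k in range(10000):
        t = k * dt
        e, e_dot = x - reference(t), v        # error and its rate (reference drift neglected)
        s = e_dot + lam * e                   # sliding surface
        u = -m * lam * e_dot - K * np.tanh(s / eps)  # equivalent + smoothed switching term
        d = 1e-3 * np.sin(2 * np.pi * 5 * t)  # bounded disturbance
        v += (u + d) / m * dt                 # double-integrator dynamics
        x += v * dt

The tanh term replaces the discontinuous sign function inside a thin boundary layer, a common choice to reduce chattering on a fine positioning stage.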

Hippocampus Segmentation using a Local Prior Model on its Boundary

Segmentation techniques based on Active Contour Models have benefited greatly from the use of prior information during their evolution. Shape prior information is captured from a training set and introduced into the optimization procedure to restrict the evolution to allowable shapes. In this way, the evolution can converge onto regions even with weak boundaries. Although significant effort has been devoted to different ways of capturing and analyzing prior information, very little attention has been paid to how image information is combined with prior information. This paper focuses on a more natural way of incorporating prior information in the level set framework. As a proof of concept, the method is applied to hippocampus segmentation in T1-MR images. Hippocampus segmentation is a very challenging task due to the heterogeneous surrounding region and the missing boundary with the neighboring amygdala, whose intensities are identical. The proposed method mimics the way humans segment and thus improves segmentation accuracy.
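
To make the blending idea concrete, the following sketch performs one explicit level set update in which an image-driven speed and a boundary-prior speed are combined by a weight beta; the prior map and the weight are assumptions for illustration, not the paper's exact model.

    import numpy as np

    def level_set_step(phi, image_speed, prior_speed, beta=0.5, dt=0.1):
        # One explicit update of the level set function phi, blending an
        # image-driven speed with a local boundary-prior speed (beta and the
        # prior map are illustrative assumptions, not the paper's exact model).
        gy, gx = np.gradient(phi)
        grad_norm = np.sqrt(gx**2 + gy**2) + 1e-8
        speed = (1.0 - beta) * image_speed + beta * prior_speed
        return phi + dt * speed * grad_norm

Where the image term is unreliable (e.g., the missing hippocampus-amygdala boundary), a locally larger beta would let the prior dominate, which is the intuition behind a boundary-local prior.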

Virulent-GO: Prediction of Virulent Proteins in Bacterial Pathogens Utilizing Gene Ontology Terms

Prediction of bacterial virulent protein sequences can aid the identification and characterization of novel virulence-associated factors and the discovery of drug/vaccine targets against proteins indispensable to pathogenicity. Gene Ontology (GO) annotation, which describes the functions of genes and gene products as a controlled vocabulary of terms, has been shown to be effective for a variety of tasks such as gene expression studies, GO annotation prediction, and protein subcellular localization. In this study, we propose a sequence-based method, Virulent-GO, that mines informative GO terms as features for predicting bacterial virulent proteins. Each protein in the datasets used by the existing method VirulentPred is annotated by using BLAST to obtain its homologs with known accession numbers for retrieving GO terms. After investigating various popular classifiers using the same five-fold cross-validation scheme, Virulent-GO, using the single kind of GO term feature, achieves an accuracy of 82.5%, slightly better than VirulentPred's 81.8% obtained with five kinds of sequence-based features. On the independent test, Virulent-GO also yields better results (82.0%) than VirulentPred (80.7%). When evaluating a single kind of feature with SVM, the GO term feature performs much better than each of the five kinds of sequence-based features.
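
A minimal sketch of the feature construction, assuming each protein's BLAST homologs have already been mapped to GO terms; the term vocabulary, annotations, and SVC settings below are illustrative.

    import numpy as np
    from sklearn.svm import SVC

    # Proteins represented as binary vectors over a set of informative GO terms
    # retrieved via BLAST homologs (terms and labels below are illustrative).
    go_vocab = ["GO:0009405", "GO:0005576", "GO:0016020", "GO:0003824"]
    annotations = [{"GO:0009405", "GO:0005576"},   # protein 1 (virulent)
                   {"GO:0003824"},                 # protein 2 (non-virulent)
                   {"GO:0009405", "GO:0016020"},   # protein 3 (virulent)
                   {"GO:0016020"}]                 # protein 4 (non-virulent)
    X = np.array([[1 if go in ann else 0 for go in go_vocab] for ann in annotations])
    y = np.array([1, 0, 1, 0])

    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.predict(X))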

Corporate Credit Rating using Multiclass Classification Models with Order Information

Corporate credit rating prediction using statistical and artificial intelligence (AI) techniques has been one of the most attractive research topics in the literature. In recent years, multiclass classification models such as the artificial neural network (ANN) and the multiclass support vector machine (MSVM) have become appealing machine learning approaches due to their good performance. However, most of them focus on classifying samples into nominal categories, so the unique characteristic of credit ratings, their ordinality, has seldom been considered. This study proposes new types of ANN and MSVM classifiers, named OMANN and OMSVM respectively, which extend binary ANN and SVM classifiers by applying an ordinal pairwise partitioning (OPP) strategy. These models can handle ordinal multiple classes efficiently and effectively. To validate the usefulness of the two models, we applied them to a real-world bond rating case and compared their results to those of conventional approaches. The experimental results show that our proposed models improve classification accuracy over typical multiclass classification techniques while requiring fewer computational resources.
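
The OPP idea can be sketched as follows: for K ordered classes, train K-1 binary classifiers answering "is the rating above k?" and sum their votes. This is a simplified variant for illustration, not the exact OMSVM/OMANN formulation.

    import numpy as np
    from sklearn.svm import SVC

    # Simplified ordinal pairwise partitioning with binary SVMs
    # (illustrative data; not the paper's exact OMSVM).
    def fit_opp(X, y, K):
        return [SVC().fit(X, (y > k).astype(int)) for k in range(K - 1)]

    def predict_opp(models, X):
        return np.sum([m.predict(X) for m in models], axis=0)  # class in {0..K-1}

    X = np.random.rand(60, 5)
    y = np.repeat(np.arange(4), 15)        # four ordered rating classes
    models = fit_opp(X, y, K=4)
    print(predict_opp(models, X[:5]))

Because each binary problem respects the class ordering, fewer and simpler classifiers are needed than in one-vs-one decompositions, which is consistent with the reduced computation the study reports.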

Multi-Context Recurrent Neural Network for Time Series Applications

This paper presents a multi-context recurrent network for time series analysis. While simple recurrent networks (SRNs) are very popular among recurrent neural networks, they still have shortcomings in learning speed and accuracy that need to be addressed. To solve these problems, we propose a multi-context recurrent network (MCRN) with three different learning algorithms. The performance of this network is evaluated on real-world applications such as handwriting recognition and energy load forecasting, and compared with the well-established SRN. The experimental results show that the MCRN is very efficient and well suited to time series analysis and its applications.
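
One plausible reading of such an architecture is an Elman-style hidden layer fed by several context layers holding delayed copies of the hidden state; the sketch below, with sizes and a two-context choice picked for illustration, shows a single time step under that assumption.

    import numpy as np

    # One time step of a multi-context recurrent cell (sizes and the
    # two-context choice are illustrative assumptions).
    n_in, n_h, n_ctx = 4, 8, 2
    rng = np.random.default_rng(0)
    W_in = rng.normal(size=(n_h, n_in))
    W_ctx = [rng.normal(size=(n_h, n_h)) for _ in range(n_ctx)]
    contexts = [np.zeros(n_h) for _ in range(n_ctx)]

    def step(x):
        h = np.tanh(W_in @ x + sum(W @ c for W, c in zip(W_ctx, contexts)))
        contexts.insert(0, h.copy())   # newest context first,
        contexts.pop()                 # oldest context dropped
        return h

    for t in range(5):
        h = step(rng.normal(size=n_in))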

Investigation of the Possibility to Prepare Supervised Classification Map of Gully Erosion by RS and GIS

This study investigates the possibility of mapping gully erosion by supervised classification of satellite images (ETM+) in two land types, mountainous and plain. The study areas were part of the Varamin plain, Tehran province (plain land type), and the Roodbar subbasin, Guilan province (mountainous land type). The positions of 652 and 124 ground control points were recorded by GPS in the mountainous and plain land types, respectively. Soil gully erosion and land uses or plant covers were investigated at these points. Based on the ground control points and auxiliary points, training points for gully erosion and other surface features were introduced to the software (Ilwis 3.3 Academic). The supervised classification map of gully erosion was prepared by the maximum likelihood method, and the overall accuracy of this map was computed. Results showed that supervised classification of gully erosion was not feasible, although more studies are needed before generalizing the results to other mountainous regions. Also, classification accuracy decreases as the number of land uses and other surface features increases in the plain physiography.
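
For reference, the maximum likelihood rule used for the supervised map assigns each pixel to the class with the highest Gaussian log-likelihood; a minimal sketch, with illustrative band values and class statistics, follows.

    import numpy as np

    # Per-pixel Gaussian maximum likelihood classification
    # (band values and class statistics are illustrative).
    def ml_classify(pixels, means, covs):
        scores = []
        for mu, cov in zip(means, covs):
            inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
            d = pixels - mu
            scores.append(-0.5 * (logdet + np.einsum("ij,jk,ik->i", d, inv, d)))
        return np.argmax(scores, axis=0)   # index of most likely class per pixel

    pixels = np.random.rand(100, 6)                    # six ETM+ bands
    means = [np.full(6, 0.3), np.full(6, 0.7)]         # gully vs. other surfaces
    covs = [np.eye(6) * 0.01, np.eye(6) * 0.02]
    labels = ml_classify(pixels, means, covs)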

Classifying Bio-Chip Data using an Ant Colony System Algorithm

Bio-chips are used for experiments on genes and contain various information such as genes, samples, and so on. Two-dimensional bio-chips, in which one axis represents genes and the other samples, are widely used these days. Instead of experimenting with real genes, which costs a great deal of money and time, bio-chips are used for biological experiments. Extracting data from bio-chips with high accuracy and finding patterns or useful information in such data is very important. Bio-chip analysis systems extract data from various kinds of bio-chips and mine the data in order to obtain useful information. One of the commonly used mining methods is classification. The appropriate classification algorithm varies with the data type, the number of attributes, and so on. Considering that bio-chip data are extremely large, an algorithm that imitates natural ecosystems, such as the ant algorithm, is a suitable choice for classification. This paper focuses on finding classification rules in bio-chip data using the Ant Colony algorithm. The developed system takes the accuracy of the discovered rules into consideration when applying them to bio-chip data to predict classes.
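
A much-simplified sketch of the pheromone mechanism behind such rule discovery is given below: ants pick single attribute-value terms guided by pheromone, and a rule's quality reinforces its term. The data, quality measure, and loop bounds are illustrative stand-ins, not the developed system.

    import numpy as np

    # Much-simplified Ant-Miner-style sketch (single-term rules only).
    rng = np.random.default_rng(1)
    X = rng.integers(0, 3, size=(200, 4))        # 4 discrete attributes
    y = (X[:, 0] == 1).astype(int)               # hidden target concept
    terms = [(a, v) for a in range(4) for v in range(3)]
    tau = np.ones(len(terms))                    # pheromone per term

    def rule_quality(term_idx):
        a, v = terms[term_idx]
        covered = X[:, a] == v
        return y[covered].mean() if covered.any() else 0.0

    for ant in range(50):
        t = rng.choice(len(terms), p=tau / tau.sum())  # ant chooses one term
        tau[t] += rule_quality(t)                      # reinforce good terms
        tau *= 0.99                                    # evaporation

    best = terms[int(np.argmax(tau))]
    print("best rule: IF attr%d == %d THEN class 1" % best)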

Combined Sewer Overflow forecasting with Feed-forward Back-propagation Artificial Neural Network

A feed-forward, back-propagation Artificial Neural Network (ANN) model has been used to forecast occurrences of wastewater overflows in a combined sewerage reticulation system. This approach was tested to evaluate its applicability as an alternative to the common practice of developing a complete conceptual, mathematical hydrological-hydraulic model of the sewerage system to enable such forecasts. The ANN approach obviates the need for a priori understanding and mathematical representation of the underlying hydrological-hydraulic phenomena, instead learning the characteristics of a sewer overflow from historical data. The performance of the standard feed-forward, back-propagation-of-error algorithm was enhanced by a modified data-normalizing technique that enabled the ANN model to extrapolate into territory unseen in the training data. The algorithm and the data-normalizing method are presented along with ANN model outputs that indicate good accuracy in the forecasted sewer overflow rates. However, it was revealed that accurate forecasting of the overflow rates is heavily dependent on the availability of real-time flow monitoring at the overflow structure to provide antecedent flow rate data. The ability of the ANN to forecast overflow rates without antecedent flow rates (as is the case with traditional conceptual reticulation models) was found to be quite poor.
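
One common way to realize such a headroom-preserving normalization (the paper's exact modification is not reproduced here) is to map the training data into [0.1, 0.9] instead of [0, 1], leaving room for flows beyond the training extremes:

    import numpy as np

    # Headroom-preserving normalization (one common variant; not the
    # paper's exact technique).
    def fit_scaler(x, lo=0.1, hi=0.9):
        xmin, xmax = x.min(), x.max()
        def scale(v):   return lo + (hi - lo) * (v - xmin) / (xmax - xmin)
        def unscale(v): return xmin + (v - lo) * (xmax - xmin) / (hi - lo)
        return scale, unscale

    flows = np.array([0.0, 2.5, 7.0, 12.0])   # historical flow rates (illustrative)
    scale, unscale = fit_scaler(flows)
    print(scale(14.0), unscale(0.95))         # out-of-range values still map sensibly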

Identification of Aircraft Gas Turbine Engines' Temperature Condition

Probability-statistical methods are poorly justified at the early stages of diagnosing the technical condition of an aviation gas turbine engine (GTE), when the available information is fuzzy, limited, and uncertain; at these stages, soft computing techniques based on fuzzy logic and neural network methods are effective. Multiple linear and nonlinear models (regression equations) derived from statistical fuzzy data are trained with high accuracy. When sufficient information is available, a recursive algorithm is proposed for identifying the GTE technical condition from measurements of the input and output parameters of the multiple linear and nonlinear generalized models in the presence of measurement noise (a new recursive least squares method (LSM)). As an application of the technique, the technical condition of an operating D30KU-154 aviation engine was estimated at an altitude of H = 10600 m.
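
For reference, a minimal sketch of the standard recursive least squares update for a linear model y = phi^T theta under measurement noise; the dimensions and data are illustrative.

    import numpy as np

    # Standard recursive least squares (RLS) update (illustrative dimensions/data).
    n = 3
    theta = np.zeros(n)            # parameter estimate
    P = np.eye(n) * 1e3            # covariance
    lam = 0.99                     # forgetting factor

    def rls_update(theta, P, phi, y):
        k = P @ phi / (lam + phi @ P @ phi)      # gain vector
        theta = theta + k * (y - phi @ theta)    # innovation correction
        P = (P - np.outer(k, phi) @ P) / lam     # covariance update
        return theta, P

    rng = np.random.default_rng(2)
    true_theta = np.array([1.0, -0.5, 2.0])
    for _ in range(200):
        phi = rng.normal(size=n)
        y = phi @ true_theta + 0.01 * rng.normal()   # noisy measurement
        theta, P = rls_update(theta, P, phi, y)
    print(theta)   # approaches true_theta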

Software Maintenance Severity Prediction with Soft Computing Approach

Since the majority of faults are found in a few modules, there is a need to identify the modules that are severely affected compared to others, and proper maintenance needs to be performed on time, especially for critical applications. In this paper, we explore different predictor models on NASA's public domain defect dataset, coded in the Perl programming language. Machine learning algorithms from the different learner categories of the WEKA project, together with a Mamdani-based fuzzy inference system and a neuro-fuzzy system, are evaluated for modeling maintenance severity, i.e., the impact of fault severity. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE). They show that the neuro-fuzzy model provides relatively better prediction accuracy than the other models and hence can be used for software maintenance severity prediction.
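
The three reported measures are straightforward to compute; a minimal sketch with illustrative severity levels:

    import numpy as np

    # Accuracy, MAE, and RMSE for illustrative severity predictions.
    y_true = np.array([2, 1, 3, 2, 1])
    y_pred = np.array([2, 1, 2, 2, 3])
    accuracy = np.mean(y_true == y_pred)
    mae = np.mean(np.abs(y_true - y_pred))
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    print(accuracy, mae, rmse)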

Pruning Method of Belief Decision Trees

The belief decision tree (BDT) approach extends decision trees to an uncertain environment where the uncertainty is represented through the Transferable Belief Model (TBM), one interpretation of belief function theory. The uncertainty may appear either in the actual classes of training objects or in the attribute values of objects to classify. Since the pruning of decision trees has received considerable attention in machine learning, we develop in this paper a post-pruning method for belief decision trees in order to reduce their size and improve classification accuracy on unseen cases.
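
A runnable sketch of generic bottom-up post-pruning on a toy tree is given below; the criterion here is plain accuracy on a pruning set (reduced-error pruning), whereas the paper's criterion is TBM-based and differs.

    from dataclasses import dataclass, field

    # Bottom-up reduced-error post-pruning on a toy tree (generic criterion;
    # the paper's TBM-based criterion is not reproduced).
    @dataclass
    class Node:
        label: int = None                 # leaf prediction (None for internal)
        attr: int = None                  # split attribute for internal nodes
        children: dict = field(default_factory=dict)

    def predict(node, x):
        return node.label if node.label is not None else predict(node.children[x[node.attr]], x)

    def errors(node, data):
        return sum(predict(node, x) != y for x, y in data)

    def majority(data):
        ys = [y for _, y in data]
        return max(set(ys), key=ys.count)

    def prune(node, data):
        if node.label is not None or not data:
            return node
        for v, child in node.children.items():
            node.children[v] = prune(child, [(x, y) for x, y in data if x[node.attr] == v])
        leaf = Node(label=majority(data))
        return leaf if errors(leaf, data) <= errors(node, data) else node

    tree = Node(attr=0, children={0: Node(label=0), 1: Node(label=1)})
    pruning_set = [((0,), 0), ((1,), 0), ((1,), 0)]
    tree = prune(tree, pruning_set)
    print(tree.label)   # subtree replaced by a leaf predicting the majority class 0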

Panoramic Sensor Based Blind Spot Accident Prevention System

There are many automotive accidents due to blind spots and driver inattentiveness. A blind spot is the area that is invisible from the driver's viewpoint without head rotation. Several methods are available for assisting drivers. The simplest are rear mirrors and wide-angle lenses, but these have the disadvantage of requiring human attention, so their accuracy depends on the driver. An automated alternative makes use of sensors such as sonar or radar to gather range information, which is then processed to detect impending collisions. The disadvantages of such sensors are low angular resolution and limited sensing volume. This paper presents a panoramic-sensor-based automotive vehicle monitoring system for blind spot accident prevention.

Multi-algorithmic Iris Authentication System

The paper proposes a novel technique for iris recognition using texture and phase features. Texture features are extracted from the normalized iris strip using the Haar wavelet, while phase features are obtained using the Log-Gabor wavelet. The matching scores generated by the individual modules are combined using the sum-of-scores technique. The system is tested on databases from Bath University and the Indian Institute of Technology Kanpur, achieving accuracies of 95.62% and 97.66%, respectively. The FAR and FRR of the combined system are also reduced compared to the individual modules.
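
A minimal sketch of the fusion step, assuming min-max normalization before the sum of scores; the matcher outputs are illustrative.

    import numpy as np

    # Min-max normalization followed by sum-of-scores fusion of the texture
    # (Haar) and phase (Log-Gabor) matcher outputs (scores illustrative).
    def min_max(s):
        return (s - s.min()) / (s.max() - s.min())

    texture_scores = np.array([0.82, 0.35, 0.91, 0.40])   # per enrolled candidate
    phase_scores   = np.array([0.75, 0.30, 0.88, 0.55])
    fused = min_max(texture_scores) + min_max(phase_scores)
    print(np.argmax(fused))        # best-matching candidate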

Detecting Email Forgery using Random Forests and Naïve Bayes Classifiers

Since email communications have no consistent authentication procedure to ensure authenticity, we present an investigative analysis approach for detecting forged emails based on Random Forests and Naïve Bayes classifiers. Instead of investigating the email headers, we use the body content to extract a unique writing style for each possible suspect. Our approach consists of four main steps: (1) the cybercrime investigator extracts effective features, including structural, lexical, linguistic, and syntactic evidence, from previous emails of all possible suspects; (2) the extracted feature vectors are normalized to increase the accuracy rate; (3) the normalized features are used to train the learning engine; (4) upon receiving an anonymous email M, the feature extraction process is applied to produce its feature vector, and the machine learning classifiers assign the email to the suspect whose writing style most closely matches M. Experimental results on real datasets show the improved performance of the proposed method and its ability to identify authors with a very limited number of features.
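
Steps (2)-(4) can be sketched as follows, with illustrative random feature vectors standing in for the extracted stylometric evidence:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.naive_bayes import GaussianNB

    # Normalize feature vectors, train both classifiers, attribute an
    # anonymous email M (random data stand in for real features).
    rng = np.random.default_rng(3)
    X = rng.random((40, 10))                  # lexical/structural/syntactic features
    y = rng.integers(0, 4, 40)                # four suspects
    mu, sd = X.mean(axis=0), X.std(axis=0)
    Xn = (X - mu) / sd                        # normalization, step (2)

    rf = RandomForestClassifier(n_estimators=100).fit(Xn, y)   # step (3)
    nb = GaussianNB().fit(Xn, y)
    m = (rng.random((1, 10)) - mu) / sd       # feature vector of anonymous email M
    print(rf.predict(m), nb.predict(m))       # step (4): most likely suspect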

Stability of Functionally Graded Beams with Piezoelectric Layers Based on the First Order Shear Deformation Theory

The stability of functionally graded beams with piezoelectric layers, simply supported at both ends and subjected to an axial compressive load, is studied in this paper. The displacement field of the beam is assumed based on the first order shear deformation beam theory. Applying Hamilton's principle, the governing equation is established. The influences of applied voltage, dimensionless geometrical parameter, functionally graded index, and piezoelectric thickness on the critical buckling load of the beam are presented. To investigate the accuracy of the present analysis, a comparison study is carried out against known data.
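
As a sanity check not taken from the paper, the homogeneous, zero-voltage limit of such a first order shear deformation analysis should recover the classical shear-deformable (Engesser) buckling load of a simply supported column:

    P_E = \frac{\pi^2 E I}{L^2}, \qquad
    P_{cr} \approx \frac{P_E}{1 + P_E / (k_s G A)},

where $E$, $G$, $I$, $A$, $L$, and $k_s$ denote the elastic modulus, shear modulus, second moment of area, cross-sectional area, beam length, and shear correction factor, respectively.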

Application of Build-up and Wash-off Models for an East-Australian Catchment

Estimation of stormwater pollutants is a prerequisite for the protection and improvement of the aquatic environment and for appropriate management options. The usual practice for stormwater quality prediction is water quality modeling; however, the accuracy of such predictions depends on proper estimation of the model parameters. This paper presents the estimation of model parameters for a catchment water quality model developed for the continuous simulation of stormwater pollutants from a catchment to its outlet. The model is capable of simulating the accumulation and transport of the stormwater pollutants suspended solids (SS), total nitrogen (TN), and total phosphorus (TP) from a particular catchment. Rainfall and water quality data were collected for the Hotham Creek Catchment (HTCC), Gold Coast, Australia. Runoff calculations from the developed model were compared with discharges calculated by the widely used hydrological models WBNM and DRAINS. Based on the measured water quality data, the model's water quality parameters were calibrated for the catchment. The calibrated parameters are expected to be helpful for best management practices (BMPs) in the region. Sensitivity analyses of the estimated parameters were performed to assess their impact on the overall model estimates of runoff water quality.
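
For orientation, the classical exponential build-up and wash-off pair commonly used in such models can be sketched as follows; the coefficients are illustrative, not the calibrated HTCC values.

    import numpy as np

    # Classical exponential build-up / wash-off (illustrative coefficients).
    def buildup(t_dry, B_max=12.0, k_b=0.4):
        # pollutant mass accumulated over t_dry antecedent dry days
        return B_max * (1.0 - np.exp(-k_b * t_dry))

    def washoff(B0, rain_intensity, dt_hours, k_w=0.2):
        # mass washed off during a time step of rainfall at given intensity
        return B0 * (1.0 - np.exp(-k_w * rain_intensity * dt_hours))

    B = buildup(t_dry=5.0)                 # e.g. suspended solids, kg/ha
    print(washoff(B, rain_intensity=10.0, dt_hours=1.0))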

Introductory Design Optimisation of a Machine Tool using a Virtual Machine Concept

Designing modern machine tools is a complex task. A simulation tool to aid the design work, a virtual machine, was therefore developed in earlier work. The virtual machine considers the interaction between the mechanics of the machine (including structural flexibility) and the control system. This paper exemplifies the usefulness of the virtual machine as a tool for product development. An optimisation study is conducted, aiming to improve the existing design of a machine tool with regard to weight and manufacturing accuracy at maintained manufacturing speed. The problem can be categorised as constrained multidisciplinary multiobjective multivariable optimisation. Parameters of the control system and geometric quantities of the machine are used as design variables. This results in a mix of continuous and discrete variables, so an optimisation approach using a genetic algorithm is deployed. The accuracy objective is evaluated according to international standards. The complete system model shows nondeterministic behaviour; a strategy to handle this, based on statistical analysis, is suggested. The weight of the main moving parts is reduced by more than 30 per cent and the manufacturing accuracy is improved by more than 60 per cent compared to the original design, with no reduction in manufacturing speed. It is also shown that interaction effects exist between the mechanics and the control, i.e. this improvement would most likely not have been possible with a conventional sequential design approach within the same time, cost, and general resource frame. This indicates the potential of the virtual machine concept for improving the efficiency of both complex products and their development process. Companies incorporating such advanced simulation tools in their product development could thus improve their own competitiveness as well as contribute to improved resource efficiency of society at large.
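
A toy sketch of a genetic algorithm over such a mixed chromosome (continuous gains plus a discrete geometry choice) is shown below; the objective, bounds, and operators are illustrative stand-ins for the virtual-machine evaluation.

    import numpy as np

    # GA over a mixed chromosome: two continuous control gains plus one
    # discrete geometry variant (toy objective; not the study's model).
    rng = np.random.default_rng(4)
    n_pop, n_gen = 30, 50
    lo, hi = np.array([0.1, 0.1]), np.array([10.0, 10.0])   # continuous bounds
    options = [0, 1, 2]                                     # discrete geometry variants

    def fitness(cont, disc):
        return -((cont - 3.0) ** 2).sum() - 0.5 * disc      # toy objective to maximize

    pop_c = rng.uniform(lo, hi, size=(n_pop, 2))
    pop_d = rng.choice(options, size=n_pop)
    for _ in range(n_gen):
        f = np.array([fitness(c, d) for c, d in zip(pop_c, pop_d)])
        idx = np.argsort(f)[-n_pop // 2:]                   # truncation selection
        parents_c, parents_d = pop_c[idx], pop_d[idx]
        kids_c = np.clip(parents_c + rng.normal(0, 0.2, parents_c.shape), lo, hi)
        kids_d = np.where(rng.random(len(parents_d)) < 0.1,         # occasional flip
                          rng.choice(options, len(parents_d)), parents_d)
        pop_c = np.vstack([parents_c, kids_c])
        pop_d = np.concatenate([parents_d, kids_d])

    best = np.argmax([fitness(c, d) for c, d in zip(pop_c, pop_d)])
    print(pop_c[best], pop_d[best])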

Design of an Intelligent Location Identification Scheme Based on LANDMARC and BPNs

Radio frequency identification (RFID) applications have grown rapidly in many industries, especially in indoor location identification. Using received signal strength indicator (RSSI) values as an indoor location measurement method is a cost-effective approach that requires no extra hardware. However, because the accuracy of many RSSI-based positioning schemes is limited by interference and the environment, designing RFID location techniques around an integrated positioning algorithm is challenging. This study proposes a location estimation approach and analyzes a scheme relying on RSSI values to minimize location errors. In addition, this paper examines the different factors that affect location accuracy by integrating a backpropagation neural network (BPN) with the LANDMARC algorithm in a training phase and an online phase. In the training phase, coordinates obtained from the LANDMARC algorithm, which uses RSSI values, together with the real coordinates of reference tags, serve as training data for constructing an appropriate BPN architecture and training length. In the online phase, the LANDMARC algorithm calculates the coordinates of tracking tags, which are then used as BPN inputs to obtain location estimates. The results show that the proposed scheme estimates locations more accurately than LANDMARC alone, without extra devices.
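
For reference, the LANDMARC estimate that feeds the BPN weights the k nearest reference tags in signal space by 1/E^2; a minimal sketch with illustrative RSSI vectors and coordinates:

    import numpy as np

    # LANDMARC weighted k-nearest-reference location estimate
    # (RSSI vectors and coordinates are illustrative).
    def landmarc(tag_rssi, ref_rssi, ref_xy, k=3):
        E = np.linalg.norm(ref_rssi - tag_rssi, axis=1)      # signal-space distances
        nearest = np.argsort(E)[:k]
        w = 1.0 / (E[nearest] ** 2 + 1e-12)
        w /= w.sum()
        return w @ ref_xy[nearest]                           # weighted coordinate estimate

    ref_rssi = np.array([[-60, -55], [-70, -58], [-65, -62], [-72, -66]])
    ref_xy = np.array([[0, 0], [0, 3], [3, 0], [3, 3]], dtype=float)
    print(landmarc(np.array([-63, -57]), ref_rssi, ref_xy))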

Control of Thermal Flow in Machine Tools Using Shape Memory Alloys

In this paper the authors propose and verify an approach to controlling heat flow in machine tool components. Thermal deformations are a main factor affecting machining accuracy. Because of energy efficiency goals, basic thermal loads should be reduced, which leads to inhomogeneous and time-variant temperature profiles. To counteract these negative consequences, a material with high melting enthalpy is used for thermal stabilization: the increased thermal capacity slows down the transient thermal behavior. To account for the delayed thermal equilibrium, a control mechanism for thermal flow is introduced. By varying a gap in a heat flow path, the thermal resistance of an assembly can be controlled. This mechanism is evaluated in two experimental setups: first, to validate the ability to control the thermal resistance, and second, to prove the possibility of a self-sufficient option based on the self-sensing abilities of thermal shape memory alloys.
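
The gap mechanism can be illustrated with a lumped thermal model in which the gap's conductive resistance R = d/(kA) grows with its width d; all dimensions and material data below are illustrative.

    import numpy as np

    # Lumped model of a controllable heat-flow path: widening an air gap
    # raises its thermal resistance and throttles the flow (illustrative data).
    k_air, area = 0.026, 0.01            # W/(m K), m^2
    C = 500.0                            # lumped heat capacity, J/K
    T, T_sink, dt = 60.0, 20.0, 1.0      # deg C, deg C, s

    def gap_resistance(d):
        return d / (k_air * area)        # R = d / (k A), K/W

    for step in range(600):
        d = 0.5e-3 if step < 300 else 2e-3     # actuator widens the gap halfway through
        q = (T - T_sink) / gap_resistance(d)   # heat flow, W
        T -= q * dt / C                        # component cools toward the sink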

High Capacity Data Hiding based on Predictor and Histogram Modification

In this paper, we propose a high-capacity image data hiding technique based on pixel prediction and modified histogram differences. The approach uses pixel prediction and the difference of the modified histogram to determine the best embedding point, improving prediction accuracy and increasing the pixel differences available for hiding, which raises the hiding capacity. Histogram modification is also used to prevent overflow and underflow. Experimental results demonstrate that, at the same average hiding capacity, the proposed method maintains high image quality and low distortion.
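
A textbook histogram-shifting embed on prediction errors, included for orientation only (the paper's predictor and pairing scheme are more elaborate):

    import numpy as np

    # Classical histogram-shifting embedding: errors at the peak bin carry
    # one bit each; bins beyond the peak shift by one to make room
    # (a textbook variant, not the proposed method itself).
    def embed(errors, bits, peak):
        out, i = errors.copy(), 0
        for j, e in enumerate(out):
            if e > peak:
                out[j] = e + 1                 # shift to create embedding space
            elif e == peak and i < len(bits):
                out[j] = e + bits[i]           # peak value carries one bit
                i += 1
        return out

    errs = np.array([0, 1, 0, 2, 0, 3, 1, 0])  # prediction errors (illustrative)
    print(embed(errs, bits=[1, 0, 1], peak=0))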