Ensemble Learning with Decision Tree for Remote Sensing Classification

In recent years, a number of works proposing the combination of multiple classifiers to produce a single classification have been reported in the remote sensing literature. The resulting classifier, referred to as an ensemble classifier, is generally found to be more accurate than any of the individual classifiers making up the ensemble. As accuracy is the primary concern, much of the research in the field of land cover classification is focused on improving classification accuracy. This study compares the performance of four ensemble approaches (boosting, bagging, DECORATE and random subspace) with a univariate decision tree as the base classifier. Two training datasets, one without any noise and the other with 20 percent noise, were used to judge the performance of the different ensemble approaches. Results with the noise-free dataset suggest an improvement of about 4% in classification accuracy with all ensemble approaches in comparison to the results provided by the univariate decision tree classifier. The highest classification accuracy, 87.43%, was achieved by the boosted decision tree. A comparison of results with the noisy dataset suggests that the bagging, DECORATE and random subspace approaches work well with these data, whereas the performance of the boosted decision tree degrades to a classification accuracy of 79.7%, which is lower than the 80.02% achieved by the unboosted decision tree classifier.
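
As a minimal sketch of the kind of comparison described above (not the authors' exact setup), the following Python snippet contrasts a single decision tree with bagging, boosting and random-subspace ensembles using scikit-learn on synthetic data standing in for spectral features; DECORATE has no scikit-learn implementation and is omitted.

```python
# Minimal sketch (assumed scikit-learn setup, synthetic data): comparing a
# single univariate decision tree with bagging, boosting and random-subspace
# ensembles built on the same base classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)   # stand-in for spectral bands

base = DecisionTreeClassifier(random_state=0)
ensembles = {
    "single tree": base,
    "bagging": BaggingClassifier(base, n_estimators=50, random_state=0),
    "boosting": AdaBoostClassifier(base, n_estimators=50, random_state=0),
    # random subspace: resample features rather than training instances
    "random subspace": BaggingClassifier(base, n_estimators=50, bootstrap=False,
                                         max_features=0.5, random_state=0),
}
for name, clf in ensembles.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:16s} accuracy = {acc:.3f}")
```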

State Feedback Speed Controller for Turbocharged Diesel Engine and Its Robustness

In this paper, full state feedback controllers capable of regulating and tracking a speed trajectory are presented. A fourth-order nonlinear mean value model of a 448 kW turbocharged diesel engine published earlier is used for this purpose. To design the controllers, the nonlinear model is linearized and represented in state-space form. Full state feedback controllers capable of meeting the varying speed demands of drivers are presented. The main focus here is to investigate the sensitivity of the controller to perturbations in the parameters of the original nonlinear model. The suggested controller is shown to be highly insensitive to parameter variations. This indicates that the controller is likely to perform with the same accuracy even after significant wear and tear of the engine due to years of use.
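
The sketch below illustrates the general design step described here, full state feedback gains obtained by pole placement on a linearized state-space model. The matrices are placeholders for illustration only, not the linearization of the engine model used in the paper.

```python
# Illustrative sketch only: full state feedback u = -K x designed by pole
# placement on a linearized state-space model.  A and B below are assumed
# placeholder matrices, not the 448 kW engine linearization from the paper.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-2.0, -3.0, -4.0]])      # hypothetical linearized dynamics
B = np.array([[0.0], [0.0], [1.0]])     # hypothetical input matrix (fueling)

desired = np.array([-2.0, -3.0, -4.5])  # desired closed-loop poles
K = place_poles(A, B, desired).gain_matrix

# Closed-loop check: eigenvalues of (A - B K) should match the desired poles.
print(np.sort(np.linalg.eigvals(A - B @ K)))
```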

Simulation and Parameterization by the Finite Element Method of a C-Shaped Electromagnet for Application in the Characterization of Magnetic Properties of Materials

This article presents the simulation, parameterization and optimization of an electromagnet with a C-shaped configuration, intended for the study of magnetic properties of materials. The electromagnet studied consists of a C-shaped yoke, which provides self-shielding that minimizes losses of magnetic flux density, two poles of high magnetic permeability and power coils wound on the poles. The main physical variable studied was the static magnetic flux density in a column within the gap between the poles, with a square cross section of 4 cm² and a length of 5 cm, seeking a suitable set of parameters that allow a uniform magnetic flux density of 1×10⁴ Gauss or above to be achieved in the column when the system operates at room temperature and with a current consumption not exceeding 5 A. By means of a magnetostatic analysis by the finite element method, the magnetic flux density and the distribution of the magnetic field lines were visualized and quantified. From the results obtained by simulating an initial configuration of the electromagnet, a structural optimization of the geometry of the adjustable caps for the ends of the poles was performed. The effect of the magnetic permeability of the soft magnetic materials used in the pole system, such as low-carbon steel (0.08% C), Permalloy (45% Ni, 54.7% Fe) and Mumetal (21.2% Fe, 78.5% Ni), was also evaluated. The intensity and uniformity of the magnetic field in the gap showed a high dependence on the factors described above. The magnetic field achieved in the column was uniform and its magnitude ranged between 1.5×10⁴ Gauss and 1.9×10⁴ Gauss according to the pole material used, with the possibility of increasing the magnetic field by choosing a suitable cap geometry, introducing a cooling system for the coils and adjusting the spacing between the poles. This makes the device a versatile and scalable tool to generate the magnetic field necessary to perform magnetic characterization of materials by techniques such as vibrating sample magnetometry (VSM), Hall-effect and Kerr-effect magnetometry, among others. Additionally, a CAD design of the modules of the electromagnet is presented in order to facilitate the construction and scaling of the physical device.
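
As a rough cross-check of the order of magnitude of the reported field, the sketch below applies the standard gapped magnetic-circuit approximation B ≈ μ0·N·I / (g + l_core/μr), neglecting fringing and leakage. The turn count and yoke path length are assumed values, not parameters taken from the article.

```python
# Back-of-the-envelope magnetic-circuit estimate (not the FEM model): gapped
# C-core approximation B ≈ mu0 * N * I / (g + l_core / mu_r), neglecting
# fringing and leakage.  N and l_core are illustrative assumptions.
from math import pi

mu0 = 4 * pi * 1e-7        # vacuum permeability [T·m/A]
N = 12000                  # total coil turns (assumed)
I = 5.0                    # coil current [A], the stated consumption limit
g = 0.05                   # pole gap [m] (the 5 cm column length)
l_core = 0.60              # magnetic path length in the yoke [m] (assumed)

for material, mu_r in [("low-carbon steel", 2000),
                       ("Permalloy 45", 50000),
                       ("Mumetal", 100000)]:
    B = mu0 * N * I / (g + l_core / mu_r)
    print(f"{material:17s}  B ≈ {B:.3f} T  ({B * 1e4:.0f} Gauss)")
```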

An Efficient Approach to Mining Frequent Itemsets on Data Streams

The increasing importance of data streams arising in a wide range of advanced applications has led to the extensive study of mining frequent patterns. Mining data streams poses many new challenges, among which are the one-scan nature, the unbounded memory requirement and the high arrival rate of data streams. In this paper, we propose a new approach for mining frequent itemsets on data streams. Our approach, SFIDS, has been developed based on the FIDS algorithm. The main aims were to keep some advantages of the previous approach and resolve some of its drawbacks, and consequently to improve run time and memory consumption. Our approach has the following advantages: it uses a data structure similar to a lattice for keeping frequent itemsets; it separates regions from each other, deleting common nodes, which results in a decrease in search space, memory consumption and run time; and finally, considering the CPU constraint, when an increasing data arrival rate overloads the system, SFIDS automatically detects this situation and discards some of the unprocessed data. We guarantee, based on a probabilistic technique, that the error of the results is bounded by a user pre-specified threshold. Final results show that the SFIDS algorithm attains about a 50% run-time improvement over the FIDS approach.
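
The snippet below is a minimal sketch of error-bounded frequent-pattern counting over a stream in the spirit of lossy counting, simplified to single items; it is not the SFIDS or FIDS algorithm, but illustrates how a one-scan method can bound its error by a user-specified threshold.

```python
# Sketch of error-bounded one-scan frequency counting (lossy counting style,
# not SFIDS/FIDS).  Items with true frequency above s*N are guaranteed to be
# reported; counts are underestimated by at most eps*N.
from collections import defaultdict

def lossy_counting(stream, s=0.02, eps=0.005):
    bucket_width = int(1 / eps)
    counts, deltas = defaultdict(int), {}
    n = 0
    for item in stream:
        n += 1
        if item not in counts:
            deltas[item] = (n - 1) // bucket_width   # current bucket id - 1
        counts[item] += 1
        if n % bucket_width == 0:                    # end of bucket: prune rare items
            bucket = n // bucket_width
            for it in list(counts):
                if counts[it] + deltas[it] <= bucket:
                    del counts[it], deltas[it]
    return {it: c for it, c in counts.items() if c >= (s - eps) * n}

print(lossy_counting(["a", "b", "a", "c", "a", "b"] * 1000))
```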

Semantic Mobility Channel (SMC): Ubiquitous and Mobile Computing Meets the Semantic Web

With the advent of emerging personal computing paradigms such as ubiquitous and mobile computing, Web contents are becoming accessible from a wide range of mobile devices. Since these devices do not have the same rendering capabilities, Web contents need to be adapted for transparent access from a variety of client agents. Such content adaptation is applied to either an individual element or a set of consecutive elements in a Web document and results in better rendering and faster delivery to the client device. Nevertheless, Web content adaptation sets new challenges for semantic markup. This paper presents an advanced component platform, called SMC, enabling the development of mobility applications and services according to a channel model based on the principles of Service-Oriented Architecture (SOA). It then goes on to describe the potential for integration with the Semantic Web through a novel framework of external semantic annotation that prescribes a scheme for representing semantic markup files and a way of associating Web documents with these external annotations. The role of semantic annotation in this framework is to describe the contents of the individual documents themselves, ensuring the preservation of the semantics during the process of adapting content rendering. Semantic Web content adaptation is a way of adding value to Web contents and facilitates repurposing of Web contents (enhanced browsing, Web Services location and access, etc.).

Enhancing the Error-Correcting Performance of LDPC Codes through an Efficient Use of Decoding Iterations

The decoding of Low-Density Parity-Check (LDPC) codes operates over a redundant structure known as the bipartite graph, meaning that the full set of bit nodes is not absolutely necessary for decoder convergence. In 2008, Soyjaudah and Catherine designed a recovery algorithm for LDPC codes based on this assumption and showed that the error-correcting performance of their codes outperformed that of conventional LDPC codes. In this work, the use of the recovery algorithm is further explored to test the performance of LDPC codes as the number of iterations is progressively increased. For experiments conducted with small block lengths of up to 800 bits and up to 2000 iterations, the results interestingly demonstrate that, contrary to conventional wisdom, the error-correcting performance keeps increasing with an increasing number of iterations.
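
To make the role of the iteration limit concrete, the following toy sketch implements a hard-decision bit-flipping LDPC decoder with an explicit max_iterations parameter; it is a generic textbook decoder on a small example code, not the recovery algorithm of Soyjaudah and Catherine.

```python
# Toy hard-decision bit-flipping decoder (generic, not the recovery algorithm
# discussed above): the iteration budget caps how long decoding may run.
import numpy as np

def bit_flip_decode(H, y, max_iterations=100):
    """H: parity-check matrix (m x n, 0/1); y: received hard-decision bits (n,)."""
    x = y.copy()
    for _ in range(max_iterations):
        syndrome = H @ x % 2
        if not syndrome.any():                     # all parity checks satisfied
            return x, True
        # flip the bit involved in the largest number of failed checks
        failed_per_bit = syndrome @ H
        x[np.argmax(failed_per_bit)] ^= 1
    return x, False

# Tiny example: a (7,4) Hamming code written as a parity-check matrix.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)                  # all-zero codeword sent
received[2] ^= 1                                   # single bit error
print(bit_flip_decode(H, received, max_iterations=50))
```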

One-Class Support Vector Machines for Aerial Images Segmentation

Interpretation of aerial images is an important task in various applications. Image segmentation can be viewed as the essential step for extracting information from aerial images. Among the many developed segmentation methods, the technique of clustering has been extensively investigated and used. However, determining the number of clusters in an image is inherently a difficult problem, especially when a priori information on the aerial image is unavailable. This study proposes a support vector machine approach for clustering aerial images. Three cluster validity indices, a distance-based index, the Davies-Bouldin index and the Xie-Beni index, are utilized as quantitative measures of the quality of the clustering results. Comparisons of the effectiveness of these indices and of various parameter settings of the proposed method are conducted. Experimental results are provided to illustrate the feasibility of the proposed approach.
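
The sketch below shows how a cluster validity index can guide the choice of the number of clusters; it uses scikit-learn's Davies-Bouldin score, one of the three indices compared in the study, with KMeans on synthetic pixel features standing in for the paper's SVM-based clustering.

```python
# Sketch of using a cluster validity index to pick the number of clusters.
# KMeans and the synthetic "pixel features" are stand-ins for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)
# toy pixel features drawn from three blobs (e.g., spectral values)
pixels = np.vstack([rng.normal(loc=c, scale=0.5, size=(200, 3))
                    for c in (0.0, 3.0, 6.0)])

for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    print(f"k={k}: Davies-Bouldin = {davies_bouldin_score(pixels, labels):.3f}")
# lower Davies-Bouldin is better; the minimum suggests the cluster count
```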

Mining Image Features in an Automatic Two-Dimensional Shape Recognition System

The number of features required to represent an image can be very large. Using all available features to recognize objects can suffer from the curse of dimensionality. Feature selection and extraction is the pre-processing step of image mining. The main issues in analyzing images are the effective identification of features and their extraction. The mining problem addressed here is the grouping of features for different shapes. Experiments have been conducted using the shape outline as the feature. Shape outline readings are put through normalization and a dimensionality reduction process using an eigenvector-based method to produce a new set of readings. After this pre-processing step, the data are grouped by shape. Through statistical analysis of these readings together with peak measures, a robust classification and recognition process is achieved. Tests showed that the suggested methods are able to automatically recognize objects through their shapes. Finally, experiments also demonstrate the system's invariance to rotation, translation, scale, reflection and, to a small degree, distortion.
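
A minimal sketch of the pre-processing step described above follows: shape-outline readings are normalized and then reduced with an eigenvector-based method (PCA). The outline data here are synthetic placeholders, not the shapes used in the experiments.

```python
# Minimal sketch of the pre-processing: normalize outline readings, then apply
# eigenvector-based dimensionality reduction (PCA).  Data are placeholders.
import numpy as np

def pca_reduce(X, n_components=2):
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)        # normalization
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)           # eigen-decomposition
    order = np.argsort(eigvals)[::-1][:n_components] # keep largest eigenvalues
    return Xc @ eigvecs[:, order]

# 50 shapes, each described by 128 outline readings (e.g., centroid distances)
outlines = np.abs(np.random.default_rng(0).normal(size=(50, 128))) + 1.0
reduced = pca_reduce(outlines, n_components=2)
print(reduced.shape)   # (50, 2): the new set of readings used for grouping
```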

Bayesian Network Model for Students' Laboratory Work Performance Assessment: An Empirical Investigation of the Optimal Construction Approach

There are three approaches to complete Bayesian Network (BN) model construction: total expert-centred, total data-centred, and semi data-centred. These three approaches constitute the basis of the empirical investigation undertaken and reported in this paper. The objective is to determine, amongst these three approaches, which is the optimal approach for the construction of a BN-based model for the performance assessment of students' laboratory work in a virtual electronic laboratory environment. BN models were constructed using all three approaches, with respect to the focus domain, and compared using a set of optimality criteria. In addition, the impact of the size and source of the training data on the performance of the total data-centred and semi data-centred models was investigated. The results of the investigation provide additional insight for BN model constructors and contribute to the literature by providing supportive evidence for the conceptual feasibility and efficiency of structure and parameter learning from data. In addition, the results highlight other interesting themes.

Computer-based Alarm Processing and Presentation Methods in Nuclear Power Plants

Computerized alarm systems have been applied increasingly to nuclear power plants. For existing plants, an add-on computer alarm system is often installed in the control room. Alarm avalanches during plant transients are a major problem with the alarm systems in nuclear power plants. Computerized alarm systems can process alarms to reduce their number during plant transients. This paper describes various alarm processing methods, an alarm cause tracking function, and various alarm presentation schemes for showing alarm information to operators effectively. These were considered during the development of several computerized alarm systems for Korean nuclear power plants and were found to be helpful to the operators.

Towards a Measurement-Based E-Government Portals Maturity Model

The emerging concept of e-government transforms the way in which citizens deal with their governments. Citizens can execute the intended services online anytime and anywhere. This brings great benefits for both the governments (a reduced number of officers) and the citizens (more flexibility and time savings). Therefore, building a maturity model to assess e-government portals becomes desirable to help in the improvement process of such portals. This paper aims at proposing an e-government maturity model based on measuring the presence of best practices. The main benefit of such a maturity model is to provide a way to rank an e-government portal based on the best practices used, and also to give a set of recommendations for moving to the next stage in the maturity model.

"A Call for School Diversity": A Practical Response to the Supreme Court Decision on Race and American Schools

American public schools should be places that reflect America's diverse society. The recent Supreme Court decision to discontinue the use of race as a factor in school admission policies has caused major setbacks in America's effort to repair its racial divide, to improve public schools, and to provide opportunities for all people, regardless of race or creed. However, educators should not allow such a legal decision to hinder their ability to teach children tolerance of others in schools and classrooms in America.

Urban Management and China's Municipal Pattern

Not only is the municipal pattern the institutional basis of urban management, but it also determines the form of the management results. There is a considerable possibility of bankruptcy for China's current municipal pattern, as it is in fact an overdraft on land deals. Based on an analysis of China's current municipal pattern, this paper proposes a new pattern whose legitimacy is verified by conceptual as well as econometric models. The conclusion is that the added value of investment in public goods is not included in China's current municipal pattern but is hidden in rising housing prices; a housing tax or municipal tax should be introduced to optimize the municipal pattern, to correct the behavior of local governments and to ensure the orderly development of China's urbanization.

The Application of Hadamard Matrices in the SNR Enhancement of Optical Time-Domain Reflectometry (OTDR)

Results in one field often give insight into others, and all have much potential for scientific and technological application. The Hadamard-transform technique, previously applied to spectrometry, also has its use in the SNR enhancement of OTDR. In this report, a new set of codes (Simplex codes) is discussed, and the origin of the additional SNR gain is indicated.
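
The sketch below builds a Simplex (S-matrix) code from a Hadamard matrix and evaluates the standard multiplexing SNR gain (n+1)/(2√n) associated with it; the code length chosen is an arbitrary example, not a value from the report.

```python
# Sketch: deriving a Simplex (S-matrix) code from a Hadamard matrix and the
# theoretical multiplexing SNR gain, as used in Hadamard-transform OTDR.
import numpy as np
from scipy.linalg import hadamard

def s_matrix(n_plus_1):
    """Simplex matrix of order n = n_plus_1 - 1 (n_plus_1 must be a power of 2)."""
    H = hadamard(n_plus_1)                          # entries +/-1
    return ((1 - H[1:, 1:]) // 2).astype(int)       # drop first row/col, map -1 -> 1

n = 127                                             # code length (example value)
S = s_matrix(n + 1)
snr_gain = (n + 1) / (2 * np.sqrt(n))               # gain over single-pulse probing
print(S.shape, f"SNR gain ≈ {snr_gain:.1f} ({20 * np.log10(snr_gain):.1f} dB)")
```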

An Energy Efficient Protocol for Target Localization in Wireless Sensor Networks

Target tracking and localization are important applications in wireless sensor networks. In these applications, sensor nodes collectively monitor and track the movement of a target. They have limited energy supplied by batteries, so energy efficiency is essential for sensor networks. Most existing target tracking protocols need to wake up sensors periodically to perform tracking, which introduces unnecessary energy waste. In this paper, an energy efficient protocol for target localization is proposed. In order to preserve energy, the protocol fixes the number of sensors used for target tracking while keeping the quality of target localization at an acceptable level. By selecting a set of sensors for target localization, the other sensors can sleep rather than periodically wake up to track the target. Simulation results show that the proposed protocol saves a significant amount of energy and also prolongs the network lifetime.
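
As a generic illustration of the idea of keeping only a fixed set of sensors awake (not the paper's exact protocol), the sketch below selects the k sensors closest to the current target estimate and lets the others sleep.

```python
# Generic sketch (not the proposed protocol): only the k sensors nearest the
# current target estimate stay awake for localization; the rest sleep.
import numpy as np

def select_tracking_set(sensor_positions, target_estimate, k=4):
    d = np.linalg.norm(sensor_positions - target_estimate, axis=1)
    return np.argsort(d)[:k]              # indices of the k nearest sensors

rng = np.random.default_rng(1)
sensors = rng.uniform(0, 100, size=(50, 2))      # 50 nodes in a 100 x 100 field
target = np.array([42.0, 37.0])                  # current target position estimate
awake = select_tracking_set(sensors, target, k=4)
print("awake sensors:", awake, "- all others may sleep until the next update")
```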

Effects of Length of Time of Fasting upon Subjective and Objective Variables When Controlling Sleep, Food and Fluid Intakes

Ramadan requires individuals to abstain from food and fluid intake between sunrise and sunset; physiological considerations predict that poorer mood, physical performance and mental performance will result. In addition, any difficulties will be worsened because preparations for fasting and recovery from it often mean that nocturnal sleep is decreased in length, and this independently affects mood and performance. A difficulty of interpretation in many studies is that the observed changes could be due to fasting but also to the decreased length of sleep and altered food and fluid intakes before and after the daytime fasting. These factors were separated in this study, which took place over three separate days and compared the effects of different durations of fasting (4, 8 or 16 h) upon a wide variety of measures (including subjective and objective assessments of performance, body composition, dehydration and responses to a short bout of exercise), but with an unchanged amount of nocturnal sleep, a controlled supper the previous evening, controlled intakes at breakfast and daytime naps not being allowed. Many of the negative effects of fasting observed in previous studies were also present in this experiment. These findings indicate that fasting was responsible for many of the changes previously observed, though some effect of sleep loss, particularly if occurring on successive days (as would occur in Ramadan), cannot be excluded.

Tracking Objects in Color Image Sequences: Application to Football Images

In this paper, we present a comparative study between two computer vision systems for object recognition and tracking. These algorithms describe two different approaches based on regions, constituted by sets of pixels, which parameterize objects in shot sequences. For image segmentation and object detection, the FCM technique is used, and the overlap between cluster distributions is minimized by the use of a suitable color space (other than RGB). The first technique takes into account a priori probabilities governing the computation of the various clusters to track objects. A Parzen kernel method is described that allows the players to be identified in each frame; we also show the importance of the search for the standard deviation value of the Gaussian probability density function. Region matching is carried out by an algorithm that operates on the Mahalanobis distance between region descriptors in two subsequent frames and uses singular value decomposition to compute a set of correspondences satisfying both the principle of proximity and the principle of exclusion.
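
The snippet below is an illustrative sketch of the distance computation in the region-matching step: Mahalanobis distances between region descriptors in two consecutive frames. The descriptor values and the inverse covariance matrix are assumptions for the example; in practice the covariance is estimated from the descriptor statistics.

```python
# Illustrative sketch of the region-matching distance: Mahalanobis distances
# between region descriptors (here, mean R, G, B per region) in two frames.
# Descriptor values and the inverse covariance VI are assumed for the example.
import numpy as np
from scipy.spatial.distance import mahalanobis

frame_t  = np.array([[120.0, 80.0, 35.0], [200.0, 40.0, 60.0]])   # regions at t
frame_t1 = np.array([[118.0, 82.0, 36.0], [205.0, 38.0, 61.0]])   # regions at t+1

VI = np.diag([1 / 25.0, 1 / 25.0, 1 / 9.0])       # assumed inverse covariance
D = np.array([[mahalanobis(a, b, VI) for b in frame_t1] for a in frame_t])
print(D)   # small values on the diagonal indicate matching regions
```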

Quality-Driven Business Process Refactoring

Appropriate description of business processes through standard notations has become one of the most important assets for organizations. Organizations must therefore deal with quality faults in business process models such as a lack of understandability and modifiability. These quality faults may be exacerbated if business process models are mined by reverse engineering, e.g., from existing information systems that support those business processes. Hence, business process refactoring is often used, which changes the internal structure of business processes whilst preserving their external behavior. This paper aims to choose the most appropriate set of refactoring operators through quality assessment concerning understandability and modifiability. These quality features are assessed through well-proven measures proposed in the literature. Additionally, a set of measure thresholds is heuristically established for applying the most promising refactoring operators, i.e., those that achieve the highest quality improvement according to the selected measures in each case.

Selection of Initial Modes for the Belief K-modes Method

The belief K-modes method (BKM) is a new clustering technique handling uncertainty in the attribute values of objects in both the cluster construction task and the classification task. Like the standard version of this method, the BKM results depend on the chosen initial modes. In this paper, a method for selecting the initial modes is therefore developed, aiming to improve the performance of the BKM approach. Experiments with several real data sets show that, with the developed initial-mode selection method, the clustering algorithm produces more accurate results.
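
For context, the sketch below shows one common non-random way to pick initial modes for K-modes on categorical data: greedily choosing objects that are far, in simple matching dissimilarity, from the modes already selected. It illustrates the general idea of informed initialization and is not the belief-function-based selection method of the paper.

```python
# Simple farthest-point initialization for K-modes on categorical data
# (illustrative only; not the paper's belief-function selection method).
import numpy as np

def matching_dissimilarity(a, b):
    return np.sum(a != b)          # number of attributes on which objects differ

def select_initial_modes(X, k):
    modes = [X[0]]                                  # start from the first object
    while len(modes) < k:
        # pick the object whose minimum distance to the current modes is largest
        dists = [min(matching_dissimilarity(x, m) for m in modes) for x in X]
        modes.append(X[int(np.argmax(dists))])
    return np.array(modes)

X = np.array([["red", "small", "round"],
              ["red", "small", "oval"],
              ["blue", "large", "round"],
              ["blue", "large", "square"],
              ["green", "medium", "round"]])
print(select_initial_modes(X, k=3))
```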

Ensembling Adaptively Constructed Polynomial Regression Models

The subset selection approach to polynomial regression model building assumes that the chosen fixed full set of predefined basis functions contains a subset sufficient to describe the target relation well. However, in most cases the necessary set of basis functions is not known and needs to be guessed, a potentially non-trivial (and long) trial-and-error process. In our research we consider a potentially more efficient approach, Adaptive Basis Function Construction (ABFC). It lets the model building method itself construct the basis functions necessary for creating a model of arbitrary complexity with adequate predictive performance. However, there are two issues that to some extent plague both subset selection and ABFC, especially when working with relatively small data samples: selection bias and selection instability. We try to correct these issues by model post-evaluation using cross-validation and model ensembling. To evaluate the proposed method, we empirically compare it to ABFC methods without ensembling, to a widely used method of subset selection, as well as to some other well-known regression modeling methods, using publicly available data sets.
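
The sketch below illustrates the general idea of combining cross-validated post-evaluation with ensembling for polynomial regression models of different complexity; it uses fixed-degree polynomial expansions as a stand-in and is not the ABFC method itself.

```python
# Generic sketch (not ABFC): polynomial regression models of several
# complexities are fitted, weighted by cross-validated R^2, and their
# predictions averaged into an ensemble.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = X[:, 0] ** 3 - X[:, 0] + rng.normal(scale=0.3, size=200)   # synthetic target

models, weights = [], []
for degree in (1, 2, 3, 4, 5):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    models.append(model.fit(X, y))
    weights.append(max(score, 0.0))        # ignore models worse than a constant

weights = np.array(weights) / np.sum(weights)
X_new = np.linspace(-2, 2, 5).reshape(-1, 1)
ensemble_pred = sum(w * m.predict(X_new) for w, m in zip(weights, models))
print(ensemble_pred)
```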