Judges System for Classifiers Specialization

In this paper we designed and implemented a new ensemble built as a sequence of classifiers, each specialized in the regions of the training dataset where the errors of its previously trained counterparts are concentrated. To separate these regions, and to determine the aptitude of each classifier to respond properly to a new case, another set of classifiers, built hierarchically, was used. We explored a selection-based variant to combine the base classifiers. We validated this model with different base classifiers using 37 training datasets. A statistical comparison of these models with the well-known Bagging and Boosting methods was carried out, obtaining significantly superior results with the hierarchical ensemble using a Multilayer Perceptron as base classifier. We thereby demonstrated the efficacy of the proposed ensemble, as well as its applicability to general problems.
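
As a toy illustration of the selection mechanism (the general idea only, not the paper's exact hierarchical construction), the following scikit-learn sketch trains a second specialist on the error region of the first and lets a "judge" classifier route each new case:

```python
# A minimal sketch of selection-based specialization, assuming scikit-learn:
# a second classifier specializes on the error region of the first, and a
# "judge" decides which specialist is apt for each new case.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# First specialist, trained on the whole training set.
c1 = MLPClassifier(max_iter=500, random_state=0).fit(X_tr, y_tr)

# Locate the region where the first classifier concentrates its errors,
# using out-of-fold predictions so the region is not emptied by overfitting.
oof = cross_val_predict(MLPClassifier(max_iter=500, random_state=0),
                        X_tr, y_tr, cv=5)
err = oof != y_tr

# Second specialist, trained on the error region of its predecessor.
c2 = MLPClassifier(max_iter=500, random_state=1).fit(X_tr[err], y_tr[err])

# The "judge" learns which cases the first specialist tends to get wrong.
judge = MLPClassifier(max_iter=500, random_state=2).fit(X_tr, err.astype(int))

# Selection-based combination: each test case is routed to one specialist.
route = judge.predict(X_te).astype(bool)
y_hat = np.where(route, c2.predict(X_te), c1.predict(X_te))
print("ensemble accuracy:", np.mean(y_hat == y_te))
```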

Managing Iterations in Product Design and Development

The inherently iterative nature of product design and development poses a significant challenge to reducing product design and development (PD) time. In order to shorten the time to market, organizations have adopted concurrent development, where multiple specialized tasks and design activities are carried out in parallel. The iterative nature of the work, coupled with the overlap of activities, can result in unpredictable time to completion and significant rework. Many products have missed their time-to-market window due to unanticipated, or rather unplanned, iteration and rework. The iterative and often overlapped processes introduce greater ambiguity into design and development, where the traditional methods and tools of project management provide less value. In this context, identifying critical metrics to understand iteration probability is an open research area where significant contributions can be made, given that iteration has been the key driver of cost and schedule risk in PD projects. Two important questions that the proposed study attempts to address are: Can we predict and identify the number of iterations in a product development flow? Can we provide managerial insights for better control over iteration? The proposal introduces the concept of decision points and, using this concept, intends to develop metrics that can provide managerial insights into iteration predictability. By characterizing the product development flow as a network of decision points, the proposed research intends to delve further into iteration probability and attempts to provide more clarity.
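
A hedged sketch of the simplest such metric: if each decision point is assumed to send work back independently with a fixed rework probability p, the number of passes through it is geometric, with expectation 1/(1 - p). This is an illustration of the decision-point view, not the proposal's own metric:

```python
# Illustrative model only: each decision point iterates back with fixed,
# independent probability p, so the number of passes is geometric and the
# expected number of iterations through the gate is 1 / (1 - p).
def expected_passes(p_rework: float) -> float:
    """Expected passes through a decision point with rework probability p."""
    return 1.0 / (1.0 - p_rework)

# A development flow as a chain of decision points with assumed rework odds:
flow = {"concept review": 0.2, "design review": 0.35, "test readiness": 0.5}
for gate, p in flow.items():
    print(f"{gate}: expected passes = {expected_passes(p):.2f}")
```

Even this crude model conveys the managerial point: a gate with 50% rework probability doubles the expected passes through everything upstream of it.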

Study of the Effectiveness of Solar Heat Gain and Daylight Factors on Minimizing Electricity Use in High-Rise Buildings

Over half of total electricity consumption is used in buildings. Air-conditioning and electric lighting are the two main sources of electricity consumption in high-rise buildings. One way to reduce electricity consumption is to limit heat gain into buildings, thereby reducing the demand for air-conditioning during hot summer months, especially in hot regions. On the other hand, natural daylight can be used to reduce the use of electricity for artificial lighting. In this paper, factors effective in minimizing heat gain and achieving the required daylight are reviewed, since daylight is always accompanied by solar heat gain. Interactions between heat gain and daylight are also discussed through previous studies and the equations relating heat gain and daylighting, especially in high-rise buildings. As a result, the importance of a building's form and its components for energy consumption in buildings is clarified.
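
Two textbook relations underlie such reviews: the daylight factor and the instantaneous heat gain through glazing. The forms below are the standard ones, stated for reference, not expressions quoted from the paper:

```latex
% Daylight factor: indoor-to-outdoor illuminance ratio under an overcast sky.
DF = \frac{E_{\mathrm{in}}}{E_{\mathrm{out}}} \times 100\%
% Instantaneous heat gain through glazing of area A: a conductive term plus
% a solar term governed by the solar heat gain coefficient (SHGC) and the
% incident solar irradiance I_t.
Q = A \left( U\,\Delta T + \mathrm{SHGC} \cdot I_t \right)
```

These two expressions make the trade-off explicit: glazing that admits more daylight (raising DF) generally also raises the SHGC·I_t term, the solar component of the cooling load.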

Robust H∞ Fuzzy Control Design for Nonlinear Two-Time Scale System with Markovian Jumps based on LMI Approach

This paper examines the problem of designing a robust H∞ state-feedback controller for a class of nonlinear two-time scale systems with Markovian jumps described by a Takagi-Sugeno (TS) fuzzy model. Based on a linear matrix inequality (LMI) approach, LMI-based sufficient conditions for the uncertain Markovian jump nonlinear two-time scale systems to have H∞ performance are derived. The proposed approach does not involve the separation of states into slow and fast ones, and it can be applied not only to standard but also to nonstandard nonlinear two-time scale systems. A numerical example is provided to illustrate the design developed in this paper.
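
The LMI machinery itself can be illustrated with a far simpler feasibility problem, a basic Lyapunov-stability LMI, assuming the cvxpy package; the paper's H∞ conditions for fuzzy two-time scale systems with Markovian jumps are much richer than this sketch:

```python
# A minimal LMI feasibility sketch (Lyapunov inequality A'P + PA < 0, P > 0),
# assuming cvxpy with an SDP-capable solver. Illustrates the LMI approach in
# general, not the paper's specific sufficient conditions.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # an assumed stable test matrix
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                  # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]   # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("LMI feasible:", prob.status == cp.OPTIMAL)
```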

Face Localization and Recognition in Varied Expressions and Illumination

In this paper, we propose a robust scheme for face alignment and recognition under various influences. For face representation, illumination and variable expressions are important factors that particularly affect the accuracy of facial localization and face recognition. To overcome these problems, we propose a robust approach consisting of two phases. In the first phase, face images are preprocessed by means of the proposed illumination normalization method, and the location of facial features can be fitted more efficiently and quickly based on the proposed image blending. In addition, based on template matching, we further improve the active shape model (called IASM) to locate the face shape more precisely, which raises the recognition rate in the next phase. The second phase performs feature extraction using principal component analysis and face recognition using support vector machine classifiers. The results show that the proposed method achieves good facial localization and face recognition under varied illumination and local distortion.
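
A minimal sketch of the second phase only (PCA features followed by SVM recognition), using scikit-learn and its bundled LFW faces as stand-in data; the paper's illumination normalization, image blending and IASM alignment are not reproduced here:

```python
# Sketch of PCA + SVM face recognition with scikit-learn; the LFW data set
# is a stand-in and downloads on first use.
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

faces = fetch_lfw_people(min_faces_per_person=50)
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, stratify=faces.target, random_state=0)

# Project onto principal components ("eigenfaces"), then classify with SVM.
model = make_pipeline(PCA(n_components=100, whiten=True, random_state=0),
                      SVC(kernel="rbf", C=10))
model.fit(X_tr, y_tr)
print("recognition accuracy:", model.score(X_te, y_te))
```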

Feature-Driven Classification of Musical Styles

In this paper we address the problem of musical style classification, which has a number of applications such as indexing in musical databases or automatic composition systems. Starting from MIDI files of real-world improvisations, we extract the melody track and cut it into overlapping segments of equal length. From these fragments, numerical features are extracted as descriptors of style samples. We show that a standard Bayesian classifier can be conveniently employed to build an effective musical style classifier once this set of features has been extracted from the musical data. Preliminary experimental results show the effectiveness of the developed classifier, which represents the first component of a musical audio retrieval system.
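
A sketch of the segment-and-classify pipeline on synthetic pitch sequences; a real system would extract the melody track from MIDI files with a MIDI parsing library, and the features below are illustrative choices, not the paper's feature set:

```python
# Overlapping-segment feature extraction plus a Gaussian naive Bayes
# classifier, the standard Bayesian baseline; data are synthetic stand-ins.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def segment_features(pitches, win=32, hop=16):
    """Cut a melody into overlapping segments and describe each numerically."""
    feats = []
    for i in range(0, len(pitches) - win + 1, hop):
        seg = np.asarray(pitches[i:i + win], dtype=float)
        iv = np.diff(seg)                         # melodic intervals
        feats.append([seg.mean(), seg.std(), np.abs(iv).mean(), iv.std()])
    return np.array(feats)

rng = np.random.default_rng(0)
style_a = rng.integers(60, 72, 400)               # narrow-range "style"
style_b = rng.integers(48, 84, 400)               # wide-range "style"
X = np.vstack([segment_features(style_a), segment_features(style_b)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))

clf = GaussianNB().fit(X, y)
print("training accuracy:", clf.score(X, y))
```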

The Effect of Compost Addition on Chemical and Nitrogen Characteristics, Respiration Activity and Biomass Production in Prepared Reclamation Substrates

Land degradation is of concern in many countries, and people increasingly must address the problems associated with man-made degradation of soil properties. Organic soil amendments, such as compost, are increasingly being examined for their potential use in soil restoration and for preventing soil erosion. In the Czech Republic, compost is most often used to improve soil structure and increase the content of soil organic matter. Land reclamation/restoration is one of the ways to make use of industrially produced compost, because Czech farmers are not willing to use compost as an organic fertilizer. The most common use of reclamation substrates in the Czech Republic is for the rehabilitation of landfills and contaminated sites. This paper deals with the influence of reclamation substrates (RS) with different proportions of compost and sand on selected soil properties: chemical characteristics, nitrogen bioavailability, leaching of mineral nitrogen, respiration activity and plant biomass production. Chemical properties vary proportionally with the addition of compost and sand to the control variant (topsoil). The highest differences between the variants were recorded in the leaching of mineral nitrogen (ranging from 1.36 mg dm⁻³ in the control to 9.09 mg dm⁻³). The addition of compost to soil improves conditions for plant growth in comparison with soil alone. However, too high an addition of compost may have adverse effects on plant growth; in addition, a high proportion of compost increases the leaching of mineral N. Therefore, a mixture of 70% soil, 10% compost and 20% sand may be recommended as the optimal composition of the RS.

Attacks Classification in Adaptive Intrusion Detection using Decision Tree

Recently, information security has become a key issue in information technology, as computers are exposed to an increasing number of security threats and breaches. A variety of intrusion detection systems (IDS) have been employed over the last decades to protect computers and networks from malicious network-based or host-based attacks, using approaches ranging from traditional statistical methods to new data mining techniques. However, today's commercially available intrusion detection systems are signature-based and are not capable of detecting unknown attacks. In this paper, we present a new learning algorithm for an anomaly-based network intrusion detection system using a decision tree algorithm that distinguishes attacks from normal behavior and identifies different types of intrusions. Experimental results on the KDD99 benchmark network intrusion detection dataset demonstrate that the proposed learning algorithm achieves a 98% detection rate (DR), in comparison with other existing methods.
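
A minimal decision-tree intrusion classifier sketch with scikit-learn; the features and injected patterns below are made-up stand-ins for KDD99-style records, not the paper's preprocessing:

```python
# Toy decision-tree classifier over KDD99-like connection features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
# toy features: connection duration, bytes sent, failed logins
X = np.column_stack([rng.exponential(5, n),
                     rng.exponential(500, n),
                     rng.poisson(0.2, n)])
# toy labels: 0 = normal, 1 = DoS-like, 2 = probe-like (illustrative classes)
y = rng.integers(0, 3, n)
# inject learnable patterns so the tree has something to find
X[y == 1, 1] *= 20          # attack class 1 sends far more bytes
X[y == 2, 2] += 5           # attack class 2 has many failed logins

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_tr, y_tr)
print("detection accuracy on held-out data:", tree.score(X_te, y_te))
```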

Fusion Classifier for Open-Set Face Recognition with Pose Variations

A fusion classifier composed of two modules, one made by a hidden Markov model (HMM) and the other by a support vector machine (SVM), is proposed to recognize faces with pose variations in open-set recognition settings. The HMM module captures the evolution of facial features across a subject's face using the subject's facial images only, without reference to the faces of others. Because of this captured evolutionary process of facial features, the HMM module retains a certain robustness against pose variations, yielding low false rejection rates (FRR) when recognizing faces across poses. This, however, comes at the price of poor false acceptance rates (FAR) when rejecting the faces of others, because the module is built upon within-class samples only. The SVM module in the proposed model follows a special design able to substantially diminish the FAR and further lower the FRR. The proposed fusion classifier has been evaluated on the CMU PIE database and proven effective for open-set face recognition with pose variations. Experiments have also shown that it outperforms a face classifier made by an HMM or an SVM alone.
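
A decision-level sketch of why the two modules complement each other; both scoring functions here are placeholders, not the paper's models or its actual fusion design:

```python
# Open-set fusion logic sketch: an HMM-style log-likelihood gates acceptance
# (keeping FRR low across poses), and an SVM-style score must agree
# (pushing FAR down for impostor faces). Thresholds are illustrative.
def fuse(hmm_loglik, svm_score, t_hmm=-50.0, t_svm=0.0):
    """Accept the claimed identity only if both modules agree."""
    return (hmm_loglik > t_hmm) and (svm_score > t_svm)

# genuine probe: plausible under the subject's HMM and on the SVM's side
print(fuse(hmm_loglik=-32.1, svm_score=1.4))   # True  -> accept
# impostor probe: the HMM alone would accept it, the SVM vetoes it
print(fuse(hmm_loglik=-41.7, svm_score=-0.8))  # False -> reject
```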

Remote-Sensing Sunspot Images to Obtain the Sunspot Roads

A combination of image fusion and quadtree decomposition is used for detecting the sunspot trajectories in each month and computing the latitudes of these trajectories in each solar hemisphere. Daily solar images taken with the SOHO satellite are fused for each month, and the resulting fused image is decomposed with the quadtree decomposition method in order to classify the sunspot trajectories and obtain precise information about their latitudes. The fusion also allows us to draw some remarkable physical conclusions about the behavior of the Sun's magnetic fields. Using quadtree decomposition, we obtain information about the regions of the solar surface, and the solid angle through which tremendous flares and hot plasma gases permeate interplanetary space and endanger satellites and other technical systems. Here, sunspot images from June, July and August 2001 are used for the study, and a method is given to compute the latitudes of sunspot trajectories in each month from sunspot images.
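
A compact quadtree decomposition sketch in NumPy, splitting a block while its intensity range exceeds a threshold; the SOHO image handling and month-wise fusion of the paper are not reproduced here, and the splitting criterion is an assumed one:

```python
# Threshold-driven quadtree decomposition of a square image.
import numpy as np

def quadtree(img, x=0, y=0, size=None, thresh=30, min_size=4):
    """Return leaf blocks (x, y, size) of a threshold-driven quadtree."""
    if size is None:
        size = img.shape[0]                 # assumes a square 2^k image
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.max() - block.min() <= thresh:
        return [(x, y, size)]               # homogeneous: stop splitting
    h = size // 2
    leaves = []
    for dx, dy in [(0, 0), (h, 0), (0, h), (h, h)]:
        leaves += quadtree(img, x + dx, y + dy, h, thresh, min_size)
    return leaves

img = np.zeros((64, 64), dtype=np.uint8)
img[20:30, 35:45] = 200                     # a bright "sunspot-like" patch
print(len(quadtree(img)), "leaf blocks")    # fine blocks cluster on the patch
```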

Neural Network based Texture Analysis of Liver Tumor from Computed Tomography Images

Advances in clinical medical imaging have brought about the routine production of vast numbers of medical images that need to be analyzed. As a result, an enormous amount of computer vision research effort has been targeted at achieving automated medical image analysis. Computed Tomography (CT) is highly accurate for diagnosing liver tumors. This study aimed to evaluate the potential role of wavelets and neural networks in the differential diagnosis of liver tumors in CT images. The tumors considered in this study are hepatocellular carcinoma, cholangiocarcinoma, hemangioma and hepatic adenoma. Each suspicious tumor region was automatically extracted from the CT abdominal images, and the textural information obtained was used to train a Probabilistic Neural Network (PNN) to classify the tumors. The results obtained were evaluated with the help of radiologists. The system differentiates the tumors with relatively high accuracy and is therefore clinically useful.
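
For reference, a Probabilistic Neural Network is essentially a Parzen-window classifier; the tiny NumPy sketch below shows the mechanism, with toy vectors standing in for the study's wavelet texture features:

```python
# A minimal PNN: each class scores a test vector by the summed Gaussian
# kernel response over that class's training exemplars.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Classify each test vector by the class with the largest kernel sum."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)
        k = np.exp(-d2 / (2 * sigma ** 2))
        scores = [k[y_train == c].sum() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

rng = np.random.default_rng(0)
X_a = rng.normal(0.0, 0.3, (30, 4))     # stand-in texture features, class 0
X_b = rng.normal(1.0, 0.3, (30, 4))     # stand-in texture features, class 1
X = np.vstack([X_a, X_b])
y = np.repeat([0, 1], 30)
print(pnn_predict(X, y, np.array([[0.1, 0.0, 0.2, 0.1],
                                  [0.9, 1.1, 1.0, 0.8]])))
```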

Utilization of 3-N-trimethylamino-1-propanol by Rhodococcus sp. strain A4 isolated from Natural Soil

The aim of this study was to screen for microorganisms able to utilize 3-N-trimethylamino-1-propanol (homocholine) as a sole source of carbon and nitrogen. Aerobic degradation of homocholine was found in a Gram-positive Rhodococcus sp. bacterium isolated from soil. The isolate was identified as Rhodococcus sp. strain A4 based on its phenotypic features, physiological and biochemical characteristics, and phylogenetic analysis. Cells of the isolated strain grown on both basal-TMAP and nutrient agar medium displayed elementary branching mycelia fragmented into irregular rod and coccoid elements. Comparative 16S rDNA sequencing studies indicated that strain A4 falls into the Rhodococcus erythropolis subclade and forms a monophyletic group with the type strains of R. opacus and R. wratislaviensis. Metabolite analysis by capillary electrophoresis, fast atom bombardment-mass spectrometry, and gas chromatography-mass spectrometry showed trimethylamine (TMA) as the major metabolite, beside β-alanine betaine and trimethylaminopropionaldehyde. Therefore, the possible degradation pathway of trimethylaminopropanol in the isolated strain is through consecutive oxidation of the alcohol group (-OH) to an aldehyde (-CHO) and an acid (-COOH), after which cleavage of the β-alanine betaine C-N bond yields trimethylamine and the alkyl chain.

Yield Prediction Using Support Vectors Based Under-Sampling in Semiconductor Process

It is important to predict yield in the semiconductor test process in order to increase it. In this study, yield prediction means finding defective dies, wafers or lots effectively. The semiconductor test process consists of several test steps, and each test includes various test items; in other words, the test data are large and complicated. The data are also disproportionately distributed, as the number of samples belonging to the FAIL class is extremely low. For yield prediction, general data mining techniques are limited without data preprocessing, owing to these inherent properties of the test data. Therefore, this study proposes an under-sampling method using a support vector machine (SVM) to alleviate the imbalance. To evaluate performance, a random under-sampling method is compared with the proposed method using actual semiconductor test data. As a result, the SVM-based sampling method proves effective in generating a robust model for yield prediction.
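
One plausible reading of SVM-guided under-sampling, sketched with scikit-learn: fit an SVM on the imbalanced data, then keep all FAIL samples plus only the PASS samples closest to the decision boundary, the informative ones. The details below are assumptions for illustration, not the study's exact procedure:

```python
# SVM-guided under-sampling sketch on synthetic imbalanced data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, weights=[0.97, 0.03],
                           random_state=0)          # rare FAIL class = 1
svm = SVC(kernel="rbf", class_weight="balanced").fit(X, y)

margin = np.abs(svm.decision_function(X))           # distance to boundary
pass_idx = np.where(y == 0)[0]
fail_idx = np.where(y == 1)[0]
# keep as many boundary-near PASS samples as there are FAIL samples
keep_pass = pass_idx[np.argsort(margin[pass_idx])[:len(fail_idx)]]
keep = np.concatenate([keep_pass, fail_idx])
X_bal, y_bal = X[keep], y[keep]
print("balanced set class counts:", np.bincount(y_bal))
```

Compared with random under-sampling, this keeps the majority-class samples that actually shape the decision boundary, which is why it tends to yield a more robust model.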

Implementing a Visual Servoing System for Robot Control

Nowadays, with the emergence of new applications such as image-based robot control, artificial vision for visual servoing is a rapidly growing discipline, and human-machine interaction plays a significant role in controlling robots. This paper presents a new algorithm based on spatio-temporal volumes for visual servoing aimed at robot control. In this algorithm, after applying the necessary pre-processing to the video frames, a spatio-temporal volume is constructed for each gesture and a feature vector is extracted. These volumes are then analyzed for matching in two consecutive stages. For hand gesture recognition and classification, we tested different classifiers, including k-nearest neighbor, learning vector quantization and back-propagation neural networks. We tested the proposed algorithm on the collected data set, and the results showed a correct gesture recognition rate of 99.58 percent. We also tested the algorithm on noisy images, where it achieved a correct recognition rate of 97.92 percent.
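
A sketch of the classifier-comparison step on stand-in gesture feature vectors (the spatio-temporal volume extraction itself is not shown, and learning vector quantization is omitted for lack of a standard scikit-learn implementation):

```python
# Comparing two of the tested classifier families on synthetic
# gesture feature vectors.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# 4 gesture classes, 40 samples each, 16-dimensional feature vectors
X = np.vstack([rng.normal(c, 0.6, (40, 16)) for c in range(4)])
y = np.repeat(np.arange(4), 40)

for name, clf in [("k-NN", KNeighborsClassifier(5)),
                  ("BP net", MLPClassifier(max_iter=2000, random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```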

Monitoring Patents Using the Statistical Process Control

Statistical process control (SPC) is one of the most powerful tools developed to assist in the effective control of quality; it involves collecting, organizing and interpreting data during production. This article aims to show how industries can use SPC to control and continuously improve product quality through the monitoring of production, detecting deviations in the parameters that represent the process and thereby reducing the amount of off-specification product and the costs of production. The study conducted a technological forecast in order to characterize the research being done related to SPC. The survey was conducted in the Espacenet and WIPO databases and at the National Institute of Industrial Property (INPI). The largest depositors are the United States and filings via the PCT, and the classification section appearing in greatest abundance was F.
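
For reference, the basic SPC monitoring device discussed above is the Shewhart X-bar chart; a minimal NumPy computation of its three-sigma limits (standard textbook construction, with synthetic data) follows:

```python
# X-bar chart: three-sigma control limits from subgroup means.
import numpy as np

rng = np.random.default_rng(0)
subgroups = rng.normal(10.0, 0.2, size=(25, 5))   # 25 subgroups of size 5

xbar = subgroups.mean(axis=1)                     # subgroup means
center = xbar.mean()                              # center line
# unbiased sigma estimate from subgroup std devs (c4 = 0.9400 for n = 5)
sigma_hat = subgroups.std(axis=1, ddof=1).mean() / 0.9400
ucl = center + 3 * sigma_hat / np.sqrt(5)         # upper control limit
lcl = center - 3 * sigma_hat / np.sqrt(5)         # lower control limit

print(f"CL={center:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")
print("out-of-control subgroups:", np.where((xbar > ucl) | (xbar < lcl))[0])
```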

Holistic Face Recognition using Multivariate Approximation, Genetic Algorithms and AdaBoost Classifier: Preliminary Results

Several works regarding facial recognition have dealt with methods that identify isolated characteristics of the face or with templates that encompass several regions of it. In this paper, a new technique that approaches the problem holistically, dispensing with the need to identify geometrical characteristics or regions of the face, is introduced. The characterization of a face is achieved by randomly sampling selected attributes of the pixels of its image. From this information we construct a data set corresponding to the values of low frequencies, gradient, entropy and several other characteristics of the image's pixels, generating a set of "p" variables. The multivariate data set is then approximated with polynomials of different degrees minimizing the fitting error in the minimax sense (L∞ norm). With the use of a Genetic Algorithm (GA) we are able to circumvent the problem of dimensionality inherent to higher-degree polynomial approximations. The GA yields the degree and the values of the set of coefficients of the polynomials approximating the image of a face. The system is trained by finding, through a resampling process, a family of characteristic polynomials in several variables (pixel characteristics) for each face (say Fi) in the database. A face (say F) is recognized by finding its characteristic polynomials and applying an AdaBoost classifier to F's polynomials against each of the Fi's polynomials. The winner is the polynomial family closest to F's, corresponding to the target face in the database.
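
The minimax (L∞) fitting step can be posed as a linear program: minimize t subject to |Vc - y| ≤ t. The sketch below fits a single univariate polynomial this way with SciPy; the paper works with multivariate polynomials and lets the GA choose the degree, which this illustration does not attempt:

```python
# Minimax (L-infinity) polynomial fit as a linear program.
import numpy as np
from scipy.optimize import linprog

x = np.linspace(-1, 1, 50)
y = np.abs(x)                       # target function to approximate
deg = 6
V = np.vander(x, deg + 1)           # Vandermonde design matrix

# variables: [c_0 .. c_deg, t]; objective: minimize t
c_obj = np.zeros(deg + 2)
c_obj[-1] = 1.0
ones = np.ones((len(x), 1))
A_ub = np.vstack([np.hstack([V, -ones]),     #   Vc - y <= t
                  np.hstack([-V, -ones])])   # -(Vc - y) <= t
b_ub = np.concatenate([y, -y])
# coefficients are free; t is nonnegative (linprog defaults to x >= 0)
bounds = [(None, None)] * (deg + 1) + [(0, None)]

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("max absolute error of minimax fit:", res.x[-1])
```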

Constitutive Equations for Human Saphenous Vein Coronary Artery Bypass Graft

Coronary artery bypass grafts (CABG) are widely studied with respect to the hemodynamic conditions which play an important role in the presence of restenosis. However, papers concerned with the constitutive modeling of CABG are lacking in the literature. The purpose of this study is to find a constitutive model for CABG tissue. A sample of CABG obtained during an autopsy underwent an inflation-extension test. Displacements were recorded by CCD cameras and subsequently evaluated by digital image correlation. Pressure-radius and axial force-elongation data were used to fit the material model. The tissue was modeled as a one-layered composite reinforced by two families of helical fibers. The material is assumed to be locally orthotropic, nonlinear, incompressible and hyperelastic. Material parameters are estimated for two strain energy functions (SEF). The first is the classical exponential form. The second SEF is logarithmic, which allows interpretation by means of limiting (finite) strain extensibility. The presented material parameters are estimated by optimization based on the radial and axial equilibrium equations of a thick-walled tube. Both material models fit the experimental data successfully, but the exponential model fits the relationship between axial force and axial strain significantly better than the logarithmic one.
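
For orientation, exponential SEFs for a matrix reinforced by two fiber families are commonly written in the Holzapfel-type form below; this is the standard textbook expression, and the paper's exact SEF may differ:

```latex
% Holzapfel-type exponential SEF for two fiber families (standard form):
% I_1 is the first invariant of the right Cauchy-Green tensor, I_4 and I_6
% are the squared stretches along the two fiber directions, and c, k_1, k_2
% are material parameters.
W = \frac{c}{2}\,(I_1 - 3)
  + \frac{k_1}{2 k_2} \sum_{i=4,6} \left[ e^{\,k_2 (I_i - 1)^2} - 1 \right]
```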

Biokinetics of the Coping Mechanism of Freshwater Tilapia Following Exposure to Waterborne and Dietary Copper

The purpose of this study was to understand the main sources of copper (Cu) accumulation in the target organs of tilapia (Oreochromis mossambicus) and to investigate how the organism mediates the process of Cu accumulation under prolonged exposure conditions. By measuring both dietary and waterborne Cu accumulation and total concentrations in tilapia with a biokinetic modeling approach, we were able to clarify the biokinetic coping mechanisms for long-term Cu accumulation. This study showed that water and food are both major sources of Cu for the muscle and liver of tilapia, implying that controlling the Cu concentration in these two routes governs the Cu bioavailability for tilapia. We found that the duration and level of waterborne Cu exposure drove the Cu accumulation in tilapia. The abilities for Cu biouptake and depuration in the organs of tilapia were actively mediated under prolonged exposure conditions. In general, the uptake rate, depuration rate and net bioaccumulation ability in all selected organs decreased with increasing levels of waterborne Cu and with the extension of the exposure duration. Muscle tissue accounted for over 50% of the total accumulated Cu and played a key role in buffering the Cu burden in the initial period of exposure, while the liver played a more important role in the storage of Cu as exposure continued. We conclude that the assumption of constant biokinetic rates could lead to incorrect predictions, overestimating long-term Cu accumulation in ecotoxicological risk assessments.
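
For reference, biokinetic models of this kind are commonly built on the standard first-order mass balance below (textbook form, assumed here; the study's finding is precisely that its rate constants cannot be treated as constant under prolonged exposure):

```latex
% First-order biokinetic model for combined waterborne and dietary uptake:
% k_u uptake rate constant from water, C_w waterborne Cu concentration,
% AE dietary assimilation efficiency, IR ingestion rate, C_f Cu in food,
% k_e efflux (depuration) rate constant, C tissue Cu concentration.
\frac{dC}{dt} = k_u C_w + AE \cdot IR \cdot C_f - k_e C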

Self Organizing Mixture Network in Mixture Discriminant Analysis: An Experimental Study

In recent works related to mixture discriminant analysis (MDA), the expectation-maximization (EM) algorithm is used to estimate the parameters of Gaussian mixtures. However, the initial values of the EM algorithm affect the final parameter estimates. Moreover, when the EM algorithm is applied twice to the same data set, it can give different parameter estimates, and this affects the classification accuracy of MDA. To overcome this problem, we use the Self Organizing Mixture Network (SOMN) algorithm to estimate the parameters of the Gaussian mixtures in MDA, as SOMN is more robust when random initial values of the parameters are used [5]. We show the effectiveness of this method on the popular simulated waveform data set and a real glass data set.
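
The initialization sensitivity that motivates the paper can be demonstrated with scikit-learn's EM-based GaussianMixture (the SOMN estimator itself is not available there): different random starts can converge to different log-likelihoods.

```python
# EM initialization sensitivity: fit the same mixture from different
# random starts and compare the converged log-likelihoods.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (200, 2)),
               rng.normal(2, 1, (200, 2)),
               rng.normal((6, -4), 1, (200, 2))])

for seed in range(3):
    gm = GaussianMixture(n_components=3, n_init=1, init_params="random",
                         random_state=seed).fit(X)
    print(f"seed {seed}: log-likelihood = {gm.score(X):.4f}")
```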

Direct Democracy and Social Contract in Ancient Athens

In the present essay, a model of choice by actors is analysed by utilizing the theory of chaos to explain how change comes about. Then, by using ancient and modern sources of literature, the theory of the social contract is analysed as a historical phenomenon that first appeared during the period of Classical Greece. Based on the findings of this analysis, the practice of direct democracy and public choice in ancient Athens is analysed through two historical cases: the political programs of Eubulus and Lycurgus in the second half of the 4th century BC. The main finding of this research is that these policies can be interpreted as the implementation of a social contract, through which citizens took decisions based on rational choice according to economic considerations.