On the Need for an Additional Methodology for Psychological Product Measurement and Evaluation

Cognitive science emerged about 40 years ago, in the wake of the challenge posed by artificial intelligence, as common territory for several scientific disciplines: computer science, mathematics, psychology, neurology, philosophy, sociology, and linguistics. The newborn science was justified by the complexity of the problems related to human knowledge on the one hand, and on the other by the fact that none of the above-mentioned sciences could explain mental phenomena alone. Based on data supplied by experimental sciences such as psychology and neurology, cognitive science builds models of how the human mind operates. These models are implemented in computer programs and/or electronic circuits (specific to artificial intelligence) – cognitive systems – whose competences and performances are compared to human ones, leading to the reinterpretation of psychological and neurological data and to the construction of new models. In these processes, psychology provides the experimental basis, while philosophy and mathematics provide the level of abstraction necessary to mediate between the sciences involved. The general problematic of the cognitive approach yields two important types of approach: the computational one, which starts from the idea that mental phenomena can be reduced to calculus operations on 1s and 0s, and the connectionist one, which regards the products of thinking as the result of interaction among all the component systems. In psychology, measurements in the computational register use classical questionnaires and psychometric tests, generally based on calculus methods. Considering both sides of cognitive science, we notice a gap in the possibilities for measuring psychological products from the connectionist perspective, which requires a unitary understanding of the quality-quantity whole. In such an approach, measurement by calculus proves inefficient. Our research, conducted over more than 20 years, leads to the conclusion that measurement by forms properly fits the laws and principles of connectionism.

An Ensemble of Weighted Support Vector Machines for Ordinal Regression

Instead of traditional (nominal) classification, we investigate the subject of ordinal classification, or ranking. An enhanced method based on an ensemble of Support Vector Machines (SVMs) is proposed. Each binary classifier is trained with specific weights for each object in the training data set. Experiments on benchmark datasets and synthetic data indicate that the performance of our approach is comparable to state-of-the-art kernel methods for ordinal regression. The ensemble method, which is straightforward to implement, provides a very good sensitivity-specificity trade-off for the highest and lowest ranks.
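As a rough illustration of the general idea, the sketch below builds an ordinal classifier from K-1 weighted binary SVMs (in the spirit of the Frank-and-Hall reduction); the per-object weighting scheme and all hyperparameters are illustrative assumptions, not the authors' formulation.

```python
# Minimal sketch: ordinal classification via an ensemble of binary SVMs.
# Ranks are assumed to be integers 0 .. n_ranks-1.
import numpy as np
from sklearn.svm import SVC

class OrdinalSVMEnsemble:
    def __init__(self, n_ranks, **svc_kwargs):
        self.n_ranks = n_ranks
        self.models = [SVC(probability=True, **svc_kwargs)
                       for _ in range(n_ranks - 1)]

    def fit(self, X, y):
        # Classifier k answers the binary question: "is the rank greater than k?"
        for k, model in enumerate(self.models):
            target = (y > k).astype(int)
            # Illustrative per-object weights: emphasize samples near the
            # current rank boundary (an assumption for demonstration only).
            weights = 1.0 / (1.0 + np.abs(y - k - 0.5))
            model.fit(X, target, sample_weight=weights)
        return self

    def predict(self, X):
        # P(y > k) for each threshold; predicted rank = number of
        # thresholds the object is estimated to exceed.
        probs = np.column_stack([m.predict_proba(X)[:, 1] for m in self.models])
        return (probs > 0.5).sum(axis=1)
```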

Multiscale Analysis and Change Detection Based on a Contrario Approach

Automatic methods of detecting changes through satellite imaging are the object of growing interest, especially because of the numerous applications linked to analysis of the Earth's surface or the environment (monitoring vegetation, updating maps, risk management, etc.). This work implemented spatial analysis techniques using images with different spatial and spectral resolutions acquired on different dates. The work was based on the principle of control charts, setting the upper and lower limits beyond which a change would be noted. The a contrario approach was then applied by testing the thresholds at which the difference computed between two pixels becomes significant. Finally, labeled images giving a particularly low difference were considered, which meant that the number of "false changes" could be estimated according to a given limit.
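To make the a contrario criterion concrete, the sketch below flags a pixel difference as meaningful when the expected number of false alarms (NFA) under a background noise model falls below a tolerance epsilon; the Gaussian background model is an assumption for illustration, not the paper's exact model.

```python
# Minimal a contrario change test between two co-registered images.
import numpy as np
from scipy.stats import norm

def a_contrario_changes(img1, img2, eps=1.0):
    diff = img1.astype(float) - img2.astype(float)
    sigma = np.std(diff)                 # background noise scale (estimated)
    n_tests = diff.size                  # one test per pixel
    # Two-sided tail probability of a difference at least this large.
    p_tail = 2 * norm.sf(np.abs(diff), scale=sigma)
    nfa = n_tests * p_tail               # expected number of false alarms
    return nfa < eps                     # boolean change mask
```

Setting eps = 1 corresponds to tolerating, on average, at most one false detection per image pair, which is the usual a contrario convention.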

Geometric Modeling of Illumination on the TFT-LCD Panel Using a Bezier Surface

In this paper, we propose a geometric model of the illumination on a patterned image containing an etched transistor pattern. This image is captured by a commercial camera during the inspection of a TFT-LCD panel. Defect inspection is an important step in the production of LCD panels, but regional differences in brightness, caused by an uneven illumination environment, have a negative effect on the inspection. To solve this problem, we present a geometric model of the illumination consisting of interpolation by the least squares method and 3D modeling with a Bezier surface. Thanks to the sampling method, our computational time is shorter than that of previous methods. Moreover, the model can be further used to correct brightness in every patterned image.
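A minimal sketch of the two ingredients named above, least squares fitting and a Bezier surface, is given below; the bicubic degree and the parameterization are assumptions for illustration.

```python
# Fit a bicubic Bezier surface to sampled brightness values by least squares,
# then evaluate it as a smooth illumination model.
import numpy as np
from scipy.special import comb

def bernstein(n, i, t):
    return comb(n, i) * t**i * (1 - t)**(n - i)

def fit_bezier_surface(u, v, z, degree=3):
    # u, v: sample parameters in [0, 1]; z: observed brightness values.
    # Design matrix has one Bernstein product per control point.
    basis = np.column_stack([
        bernstein(degree, i, u) * bernstein(degree, j, v)
        for i in range(degree + 1) for j in range(degree + 1)])
    ctrl, *_ = np.linalg.lstsq(basis, z, rcond=None)
    return ctrl.reshape(degree + 1, degree + 1)

def eval_bezier_surface(ctrl, u, v):
    n = ctrl.shape[0] - 1
    return sum(ctrl[i, j] * bernstein(n, i, u) * bernstein(n, j, v)
               for i in range(n + 1) for j in range(n + 1))
```

Dividing a captured image by the evaluated surface (suitably normalized) is one straightforward way such a model can correct uneven brightness.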

Clustering Mixed Data Using Non-normal Regression Tree for Process Monitoring

In the semiconductor manufacturing process, large amounts of data are collected from various sensors across multiple facilities. The collected sensor data have several different characteristics due to variables such as product type, preceding processes, and recipes. In general, Statistical Quality Control (SQC) methods assume normality of the data when detecting out-of-control states of processes. Because the collected data have different characteristics, using them directly as SQC inputs increases the variation in the data, requires wide control limits, and decreases the ability to detect out-of-control states. It is therefore necessary to separate similar data groups from the mixed data for more accurate process control. In this paper, we propose a regression tree whose split algorithm is based on the Pearson distribution system, handling non-normal distributions in a parametric manner. The regression tree finds similar properties of the data across different variables. Experiments using real semiconductor manufacturing process data show improved fault detection performance.
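As a stand-in for the (unspecified) Pearson-based split criterion, the sketch below scores a candidate split by the gain in Pearson type III log-likelihood; the choice of Pearson type III and the gain formulation are assumptions for illustration.

```python
# Score a candidate split of the response y by parametric likelihood gain.
import numpy as np
from scipy.stats import pearson3

def loglik(y):
    skew, loc, scale = pearson3.fit(y)
    return pearson3.logpdf(y, skew, loc=loc, scale=scale).sum()

def split_gain(y, mask):
    # Gain of splitting y into y[mask] and y[~mask]; higher is better.
    return loglik(y[mask]) + loglik(y[~mask]) - loglik(y)
```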

Data Mining Classification Methods Applied in Drug Design

Data mining incorporates a group of statistical methods used to analyze a set of information, or a data set. It operates with models and algorithms, which are powerful tools with great potential. Because they can help people understand the patterns in a given chunk of information, data mining tools have a wide area of application. In theoretical chemistry, for example, data mining tools can be used to predict molecular properties or improve computer-assisted drug design. Classification analysis is one of the major data mining methodologies. The aim of this contribution is to create a classification model able to deal with a huge data set with high accuracy. For this purpose, logistic regression, Bayesian logistic regression, and random forest models were built using the R software; a Bayesian logistic regression model was also created in the Latent GOLD software. These classification methods belong to the supervised learning methods. It was necessary to reduce the dimension of the data matrix before constructing the models, and thus factor analysis (FA) was used. The models were applied to predict the biological activity of molecules, i.e., potential new drug candidates.
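The sketch below reproduces the overall workflow (factor analysis for dimension reduction, then two of the classifiers mentioned) in Python rather than the R/Latent GOLD tooling used in the study; all component and parameter choices are illustrative assumptions.

```python
# Dimension reduction + classification pipelines, evaluated by cross-validation.
from sklearn.decomposition import FactorAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def build_models(X, y, n_factors=10):
    models = {
        "logistic": make_pipeline(FactorAnalysis(n_components=n_factors),
                                  LogisticRegression(max_iter=1000)),
        "random_forest": make_pipeline(FactorAnalysis(n_components=n_factors),
                                       RandomForestClassifier(n_estimators=500)),
    }
    # Mean 5-fold cross-validated accuracy per model.
    return {name: cross_val_score(m, X, y, cv=5).mean()
            for name, m in models.items()}
```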

High-Quality Speech Coding Using Combined Parametric and Perceptual Modules

A novel approach to speech coding using a hybrid architecture is presented. The advantages of parametric and perceptual coding methods are combined in order to create a speech coding algorithm assuring better signal quality than a traditional CELP parametric codec. Two approaches are discussed. The first is based on separating the signal into voiced components, which are encoded using a parametric algorithm; unvoiced components, which are encoded perceptually; and transients, which remain unencoded. The second approach uses perceptual encoding of the residual signal in a CELP codec. The algorithm applied for precise transient selection is described. The signal quality achieved with the proposed hybrid codec is compared to that of some standard speech codecs.
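As a stand-in for the component-selection step (whose exact algorithm the abstract does not specify), the sketch below uses the classic energy and zero-crossing-rate heuristic to label frames as voiced or unvoiced; the frame length and thresholds are assumptions.

```python
# Classic voiced/unvoiced frame labeling by energy and zero-crossing rate.
import numpy as np

def classify_frames(signal, frame_len=320, energy_thresh=0.01, zcr_thresh=0.1):
    # frame_len = 320 assumes 20 ms frames at 16 kHz sampling.
    labels = []
    for start in range(0, len(signal) - frame_len, frame_len):
        frame = signal[start:start + frame_len]
        energy = np.mean(frame ** 2)
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2
        # Voiced speech: high energy and low zero-crossing rate.
        labels.append("voiced" if energy > energy_thresh and zcr < zcr_thresh
                      else "unvoiced")
    return labels
```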

Bioinformatics Profiling of Missense Mutations

The ability to distinguish missense nucleotide substitutions that contribute to a harmful effect from those that do not is a difficult problem, usually addressed through functional in vivo analyses. In this study, instead of current biochemical methods, the effects of missense mutations on protein structure and function were assayed by means of computational methods and information from databases. To this end, the effects of new missense mutations in exon 5 of the PTEN gene on protein structure and function were examined. The gene coding for PTEN has been identified and localized to chromosome region 10q23.3 as a tumor suppressor gene. These methods showed that the c.319G>A and c.341T>G missense mutations, which were identified in patients with breast cancer and Cowden disease, could be pathogenic. This approach could be used to analyze missense mutations in other genes.

Low Power Bus Binding Based on Dynamic Bit Reordering

In this paper, the problem of reducing switching activity in on-chip buses at the stage of high-level synthesis is considered, and a high-level low-power bus binding based on dynamic bit reordering is proposed. Whereas conventional methods use a fixed bit ordering between variables within a bus, the proposed method switches the bit ordering dynamically to obtain a reduction in switching activity. As a result, the proposed method finds a binding solution with a smaller total switching activity (TSA). Experimental results show that the proposed method obtains a binding solution with 12.0-34.9% smaller TSA compared with conventional methods.
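The sketch below illustrates why per-word bit reordering can reduce TSA: for each transferred word, a greedy pass picks the bit permutation that flips the fewest wires relative to the current bus state. The exhaustive permutation search and greedy strategy are illustrative assumptions, not the paper's binding algorithm, and a real design must also convey the chosen ordering to the receiving side.

```python
# Greedy dynamic bit reordering: count total wire transitions (TSA).
from itertools import permutations

def to_bits(word, width):
    return tuple((word >> b) & 1 for b in range(width))

def dynamic_reordering_tsa(words, width=4):
    wires = to_bits(words[0], width)
    total = 0
    for word in words[1:]:
        bits = to_bits(word, width)
        # Choose the ordering of this word's bits that flips fewest wires.
        best = min((tuple(bits[p] for p in perm)
                    for perm in permutations(range(width))),
                   key=lambda cand: sum(a != b for a, b in zip(cand, wires)))
        total += sum(a != b for a, b in zip(best, wires))
        wires = best
    return total

print(dynamic_reordering_tsa([0b0001, 0b1000, 0b0010]))  # 0: reordering absorbs all flips
```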

A Novel FFT-Based Frequency Offset Estimator for OFDM Systems

This paper proposes a novel frequency offset (FO) estimator for orthogonal frequency division multiplexing (OFDM) systems. Simplicity is the most significant feature of this algorithm, which can be applied iteratively to achieve acceptable accuracy. Moreover, the fractional and integer parts of the FO are estimated jointly using the same algorithm. To do so, instead of the conventional algorithms that usually rely on a correlation function, we use the DFT of the received signal. Complexity is therefore reduced, and the synchronization procedure can be performed with the same hardware used to demodulate the OFDM symbol. Finally, computer simulations show that the accuracy of this method is better than that of conventional methods.
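A minimal sketch of the FFT-based idea is given below: the integer part of the offset comes from the FFT peak bin, and the fractional part from a standard parabolic interpolation around the peak. This illustrates the joint integer/fractional principle rather than the paper's exact estimator.

```python
# Joint integer + fractional frequency offset estimate from an FFT peak.
import numpy as np

def estimate_fo(rx, n_fft=None):
    n = n_fft or len(rx)
    spectrum = np.abs(np.fft.fft(rx, n))
    k = int(np.argmax(spectrum))                  # integer FO in bins
    left, peak, right = spectrum[k - 1], spectrum[k], spectrum[(k + 1) % n]
    # Parabolic interpolation around the peak for the fractional part.
    frac = 0.5 * (left - right) / (left - 2 * peak + right)
    return ((k + frac + n / 2) % n) - n / 2       # wrap to [-n/2, n/2)

# Example: a complex tone with an offset of 3.3 bins.
n = 256
t = np.arange(n)
rx = np.exp(2j * np.pi * 3.3 * t / n)
print(estimate_fo(rx))  # approximately 3.3
```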

Optimization Approaches for a Complex Dairy Farm Simulation Model

This paper describes the optimization of a complex dairy farm simulation model using two quite different optimization methods, the genetic algorithm (GA) and the Lipschitz branch-and-bound (LBB) algorithm. These techniques have been used to improve an agricultural system model developed by Dexcel Limited, New Zealand, which provides a detailed representation of pastoral dairying scenarios over an 8-dimensional parameter space. The model incorporates sub-models of pasture growth and animal metabolism, which are themselves complex. Each evaluation of the objective function, a composite 'Farm Performance Index' (FPI), requires simulating at least a one-year period of farm operation with a daily time step and is therefore computationally expensive. The problem of visualizing the objective function (response surface) in high-dimensional spaces is also considered in the context of the farm optimization problem. Adaptations of the Sammon mapping and parallel coordinates visualizations are described which help reveal some important properties of the model's output topography. From this study, it is found that the GA requires fewer function evaluations in optimization than the LBB algorithm.
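For readers unfamiliar with the GA side of the comparison, the sketch below shows a minimal real-coded genetic algorithm of the kind that could drive an expensive simulation objective; `simulate_fpi` is a hypothetical placeholder for the Dexcel model, and all GA settings are illustrative assumptions.

```python
# Minimal real-coded GA over an 8-dimensional unit hypercube.
import numpy as np

rng = np.random.default_rng(0)

def simulate_fpi(x):
    # Placeholder for the expensive farm simulation objective.
    return -np.sum((x - 0.5) ** 2)

def ga_maximize(obj, dim=8, pop_size=20, generations=50, sigma=0.1):
    pop = rng.random((pop_size, dim))
    for _ in range(generations):
        fitness = np.array([obj(x) for x in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]  # truncation selection
        children = parents[rng.integers(len(parents), size=pop_size - len(parents))]
        children = np.clip(children + rng.normal(0, sigma, children.shape), 0, 1)
        pop = np.vstack([parents, children])                 # Gaussian mutation
    return pop[np.argmax([obj(x) for x in pop])]

print(ga_maximize(simulate_fpi))
```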

Applying 5S Lean Technology: An Infrastructure for Continuous Process Improvement

This paper presents an application of 5S lean technology to a production facility. Due to increased demand, high product variety, and a push production system, the plant has suffered from excessive waste, unorganized workstations, and an unhealthy work environment. This has translated into increased production cost, frequent delays, and low worker morale. Under such conditions, it has become difficult, if not impossible, to implement effective continuous improvement studies. Hence, the lean project is aimed at diagnosing the production process, streamlining the workflow, removing or reducing process waste, cleaning the production environment, improving plant layout, and organizing workstations. 5S lean technology is utilized for achieving the project objectives. The work combined culture changes with tangible, physical changes on the shop floor. The project has drastically changed the plant and developed the infrastructure for a successful implementation of continuous improvement as well as other best practices and quality initiatives.

Towards CO2 Adsorption Enhancement via Polyethyleneimine Impregnation

To reduce carbon dioxide emissions into the atmosphere, adsorption is believed to be one of the most attractive methods for post-combustion treatment of flue gas. In this work, activated carbon (AC) was modified with polyethyleneimine (PEI) via impregnation in order to enhance its CO2 adsorption capacity. The adsorbents were produced at 0.04, 0.16, 0.22, 0.25, and 0.28 wt% PEI/AC. The adsorption was carried out over a temperature range from 30 °C to 75 °C and at five different gas pressures up to 1 atm. TG-DTA, FT-IR, UV-visible spectroscopy, and BET analysis were used to characterize the adsorbents. The effects of PEI loading on the AC on CO2 adsorption were investigated, as was the effectiveness of the adsorbents in terms of CO2 adsorption capacity and adsorption temperature. At 30 °C, CO2 adsorption capacities were enhanced as the amount of PEI increased from 0.04 to 0.22 wt%, before decreasing from 0.25 wt% PEI onwards. The 0.22 wt% PEI/AC also showed higher adsorption capacity than the plain AC for adsorption at 50 °C to 75 °C.

The Potential Use of Nanofilters to Supply Potable Water in the Persian Gulf and Oman Sea Watershed Basins

In a world worried about water resources, with the shadow of drought and famine looming all around, the quality of water is as important as its quantity. The source of all concerns is the constant reduction in per capita quality water for different uses. With an average annual precipitation of 250 mm, compared to the 800 mm world average, Iran is considered a water-scarce country, and the disparity in rainfall distribution, the limitations of renewable resources, and the concentration of population on the margins of deserts and water-scarce areas have intensified the problem. The shortage of per capita renewable freshwater and its poor quality in large areas of the country, which have saline, brackish, or hard water resources, together with the profusion of natural and artificial pollutants, have caused the deterioration of water quality. Among the methods for treating and using such waters is the application of membrane technologies, which have come into focus in recent years due to their great advantages. The nanofiltration process is quite efficient in eliminating multivalent ions; owing to the possibility of production at different capacities, its applicability as a point-of-use treatment process, and its lower energy requirements compared to reverse osmosis, it could revolutionize the water and wastewater sector in the years to come. This article studies the capacities of the water resources in the Persian Gulf and Oman Sea watershed basins and assesses the possibility of using nanofiltration to treat the brackish and non-conventional waters of these basins.

A Trainable Neural Network Ensemble for ECG Beat Classification

This paper illustrates the use of a combined neural network model for the classification of electrocardiogram (ECG) beats. We present a trainable neural network ensemble approach to developing a customized ECG beat classifier, in an effort to further improve the performance of ECG processing and to offer individualized health care. We employ a three-stage technique, comprising denoising, feature extraction, and classification, for the detection of premature ventricular contractions (PVCs) among normal beats and other heart diseases. First, we investigate the application of the stationary wavelet transform (SWT) for noise reduction of the ECG signals. The feature extraction module then extracts 10 ECG morphological features and one timing interval feature. Next, a number of multilayer perceptron (MLP) neural networks with different topologies are designed. The performance of the different combination methods, as well as the efficiency of the whole system, is presented. Among the combination methods, stacked generalization, as the proposed trainable combined neural network model, achieves the highest recognition rate, around 95%, and therefore proves to be a suitable candidate for ECG signal diagnosis systems. The ECG samples representing the different beat types were extracted from the MIT-BIH arrhythmia database.
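The sketch below shows the stacked generalization idea over several MLP topologies; the SWT denoising, feature extraction, and MIT-BIH data handling are out of scope, and the topologies and meta-learner are illustrative assumptions rather than the paper's configuration.

```python
# Stacked generalization: MLP base learners combined by a trainable meta-learner.
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

base_learners = [
    (f"mlp_{i}", MLPClassifier(hidden_layer_sizes=h, max_iter=2000))
    for i, h in enumerate([(10,), (20,), (10, 10)])
]
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),  # level-1 combiner
    cv=5,   # base learners produce out-of-fold predictions for meta-training
)
# Usage on extracted beat features: stack.fit(X_train, y_train)
#                                   stack.score(X_test, y_test)
```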

Effects of Environmental Factors on Polychaete Assemblage in Penang National Park, Malaysia

The distribution of macrobenthos along the coastal waters of Penang National Park was studied to estimate the effect of different environmental parameters at three stations over six sampling months, from June 2010 to April 2011. The aim of this survey was to investigate different environmental stresses on the soft-bottom polychaete community along Teluk Ketapang and Pantai Acheh (Penang National Park) over a one-year period. Variations in the polychaete community were evaluated using univariate and multivariate methods. A total of 604 individuals were examined and grouped into 23 families. The family Nereididae was the most abundant (22.68%), followed by Spionidae (22.02%), Hesionidae (12.58%), Nephtyidae (9.27%), and Orbiniidae (8.61%). It is notable that good results can only be obtained on the basis of good taxonomic resolution. The maximum Shannon-Wiener diversity (H' = 2.16) was recorded at distances of 200 m and 1200 m (August 2010) in Teluk Ketapang, and the lowest diversity was found at a distance of 1200 m (December 2010) in Teluk Ketapang.
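For reference, the Shannon-Wiener index used above is computed as in the sketch below; the abundance counts in the example are made up for illustration, not the survey data.

```python
# Shannon-Wiener diversity index H' from abundance counts.
import numpy as np

def shannon_wiener(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()               # relative abundances
    return -np.sum(p * np.log(p))        # natural-log convention

print(shannon_wiener([137, 133, 76, 56, 52]))  # five hypothetical family counts
```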

DEA ANN Approach in Supplier Evaluation System

In Supply Chain Management (SCM), strengthening partnerships with suppliers is a significant factor for enhancing competitiveness; hence, firms increasingly emphasize supplier evaluation processes. Supplier evaluation systems are basically developed in terms of criteria such as quality, cost, delivery, and flexibility. Because there are many variables to be analyzed, this process becomes hard to execute and requires expertise. On this account, this study aims to develop an expert system for the supplier evaluation process by designing an Artificial Neural Network (ANN) supported by Data Envelopment Analysis (DEA). The methods are applied to data on 24 suppliers that have long-term relationships with a medium-sized company from the German iron and steel industry. The supplier data consist of variables such as material quality (MQ), discount of amount (DOA), discount of cash (DOC), payment term (PT), delivery time (DT), and annual revenue (AR). The efficiency scores generated by DEA are added to the supplier evaluation system to serve as system outputs.
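One standard way to compute such DEA efficiency scores is the CCR multiplier-form linear program, solved once per supplier (DMU), as sketched below; the split of the variables into inputs and outputs would be a modeling decision, and the data here are placeholders.

```python
# CCR (constant returns to scale) DEA efficiency via one LP per DMU.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """X: (n, m) input matrix, Y: (n, s) output matrix; returns n scores."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: output weights u (s), then input weights v (m).
        c = np.concatenate([-Y[o], np.zeros(m)])            # maximize u . y_o
        A_ub = np.hstack([Y, -X])                           # u.y_j - v.x_j <= 0
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(s), X[o]])[None]    # v . x_o = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0])
        scores.append(-res.fun)                             # efficiency in (0, 1]
    return scores
```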

Analytic and Finite Element Solutions for Temperature Profiles in Welding Using Varied Heat Source Models

Solutions for the temperature profile around a moving heat source are obtained using both analytic and finite element method (FEM) approaches, and both are applied to study the temperature profile in welding. The moving heat source is represented using both a point heat source model and a uniformly distributed disc heat source model. Analytic solutions are obtained by solving the partial differential equation for energy conservation in a solid, and FEM results are obtained by simulating welding in the ANSYS software. The comparison is made for quasi-steady-state conditions. The results provided by the analytic solutions are in good agreement with those obtained by FEM.
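The classical analytic solution for a moving point source on a thick plate is the Rosenthal quasi-steady solution, sketched below; the material constants are illustrative (roughly mild steel), and the paper's exact parameter values are not reproduced here.

```python
# Rosenthal quasi-steady temperature field for a moving point heat source.
import numpy as np

def rosenthal(xi, y, z, Q=1500.0, v=5e-3, k=40.0, alpha=8e-6, T0=300.0):
    """Temperature (K) at moving-frame coordinates (xi, y, z) in meters.
    xi > 0 lies ahead of the source; Q in W, v in m/s, k in W/(m K),
    alpha (thermal diffusivity) in m^2/s."""
    R = np.sqrt(xi**2 + y**2 + z**2)
    return T0 + Q / (2 * np.pi * k * R) * np.exp(-v * (xi + R) / (2 * alpha))

# Temperature 5 mm behind the source, on the plate surface:
print(rosenthal(-5e-3, 0.0, 0.0))
```

The exponential factor captures the asymmetry of welding isotherms: a long hot tail trails the source, while temperature drops steeply ahead of it.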

Exponential Stability and Periodicity of a Class of Cellular Neural Networks with Time-Varying Delays

The problem of exponential stability and periodicity for a class of delayed cellular neural networks (DCNNs) with time-varying delays is investigated. By dividing the network state variables into subgroups according to the characteristics of the neural networks, some sufficient conditions for exponential stability and periodicity are derived via the method of variation of parameters and inequality techniques. These conditions are expressed in terms of blocks of the interconnection matrices. Compared with some previous methods, the method used in this paper does not resort to any Lyapunov function, and the results derived here improve and generalize some earlier criteria established in the literature. Two examples are discussed to illustrate the main results.
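For concreteness, the standard DCNN model that abstracts of this kind typically analyze is written below; the notation is assumed, not taken from the paper itself.

```latex
% Delayed cellular neural network with time-varying delays \tau_{ij}(t):
\dot{x}_i(t) = -d_i x_i(t)
  + \sum_{j=1}^{n} a_{ij} f_j\bigl(x_j(t)\bigr)
  + \sum_{j=1}^{n} b_{ij} f_j\bigl(x_j(t - \tau_{ij}(t))\bigr) + I_i,
  \qquad i = 1, \dots, n
```

Here A = (a_ij) and B = (b_ij) are the interconnection matrices whose blocks enter the derived stability conditions.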

An Energy-Efficient Protocol with Static Clustering for Wireless Sensor Networks

A wireless sensor network with a large number of tiny sensor nodes can be used as an effective tool for gathering data in various situations. One of the major issues in wireless sensor networks is developing an energy-efficient routing protocol, which has a significant impact on the overall lifetime of the sensor network. In this paper, we propose a novel hierarchical routing protocol with static clustering, called Energy-Efficient Protocol with Static Clustering (EEPSC). EEPSC partitions the network into static clusters, eliminates the overhead of dynamic clustering, and utilizes temporary cluster heads to distribute the energy load among high-power sensor nodes, thus extending network lifetime. We have conducted simulation-based evaluations to compare the performance of EEPSC against Low-Energy Adaptive Clustering Hierarchy (LEACH). Our experimental results show that EEPSC outperforms LEACH in terms of network lifetime and power consumption minimization.
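Simulations comparing clustering protocols such as LEACH and EEPSC conventionally rest on the first-order radio energy model, sketched below; the constants are the values typical in the LEACH literature and are assumed here rather than taken from this paper.

```python
# First-order radio energy model for wireless sensor network simulation.
E_ELEC = 50e-9      # J/bit, transceiver electronics energy
EPS_AMP = 100e-12   # J/bit/m^2, amplifier energy (free-space path loss)

def tx_energy(k_bits, d):
    # Energy to transmit k bits over distance d (meters).
    return E_ELEC * k_bits + EPS_AMP * k_bits * d ** 2

def rx_energy(k_bits):
    # Energy to receive k bits.
    return E_ELEC * k_bits

# Example: a 2000-bit packet sent 50 m from a member node to its cluster head.
print(tx_energy(2000, 50), rx_energy(2000))
```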