Visualization and Indexing of Spectral Databases

On-line (near infrared) spectroscopy is widely used to support the operation of complex process systems. Information extracted from a spectral database can be used to estimate unmeasured product properties and to monitor the operation of the process. These techniques are based on looking for similar spectra with nearest neighbor algorithms and distance-based search methods. The search for nearest neighbors in the spectral space is an NP-hard problem whose computational complexity increases with the number of points in the discrete spectrum and the number of samples in the database. To reduce the calculation time, some kind of indexing can be used. The main idea presented in this paper is to combine indexing and visualization techniques to reduce the computational requirements of estimation algorithms by providing a two-dimensional index that can also be used to visualize the structure of the spectral database. This 2D visualization of the spectral database not only supports the application of distance- and similarity-based techniques but also enables advanced clustering and prediction algorithms based on the Delaunay tessellation of the mapped spectral space. This means that prediction does not have to operate in the high-dimensional space but can be based on the mapped space as well. The results illustrate that the proposed method is able to segment (cluster) spectral databases and to detect outliers that are not suitable for instance-based learning algorithms.
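
The abstract does not spell out the mapping, but the idea can be sketched: project the spectra to 2D (a PCA projection is assumed below purely for illustration), build a Delaunay tessellation of the mapped points with SciPy, and estimate a property for a query spectrum from the vertices of its enclosing triangle. The array sizes and property values are hypothetical.

    # Minimal sketch, not the authors' exact pipeline.
    import numpy as np
    from scipy.spatial import Delaunay
    from sklearn.decomposition import PCA

    spectra = np.random.rand(500, 1024)   # hypothetical database: 500 spectra, 1024 wavelengths
    props = np.random.rand(500)           # hypothetical measured property per spectrum

    pca = PCA(n_components=2)             # 2D mapping stands in for the paper's projection
    mapped = pca.fit_transform(spectra)
    tri = Delaunay(mapped)                # tessellation of the mapped spectral space

    query = pca.transform(np.random.rand(1, 1024))
    simplex = tri.find_simplex(query)[0]  # index of the enclosing triangle
    if simplex >= 0:
        neighbors = tri.simplices[simplex]    # the three enclosing database spectra
        estimate = props[neighbors].mean()    # simple local estimate from the triangle vertices
    else:
        estimate = None                       # query falls outside the mapped database (potential outlier)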

Subcritical Water Extraction of Mannitol from Olive Leaves

Subcritical water extraction was investigated as a novel and alternative technology in the food and pharmaceutical industry for the separation of mannitol from olive leaves, and its results were compared with those of Soxhlet extraction. The effects of the temperature, pressure, and flow rate of the water, as well as momentum and mass transfer dimensionless variables such as the Reynolds and Peclet numbers, on the extraction yield and equilibrium partition coefficient were investigated. The water operating conditions were 30-110 bar, 60-150°C, and flow rates of 0.2-2 mL/min. The results revealed that the highest mannitol yield was obtained at 100°C and 50 bar. However, the extraction of mannitol was not influenced by variations in flow rate. Mathematical modeling of the experimental measurements was also carried out, and the model predicts the measurements well. In addition, the results indicated a higher extraction yield for subcritical water extraction than for the Soxhlet method.

Target Detection with Improved Image Texture Feature Coding Method and Support Vector Machine

An image texture analysis and target recognition approach using an improved image texture feature coding method (TFCM) and a Support Vector Machine (SVM) for target detection is presented. With the proposed target detection framework, targets of interest can be detected accurately. A Cascade-Sliding-Window technique was also developed for automated target localization. Application to mammograms showed that over 88% of normal mammograms and 80% of abnormal mammograms can be correctly identified. The approach was also successfully applied to Synthetic Aperture Radar (SAR) and Ground Penetrating Radar (GPR) images for target detection.
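
As a rough illustration of how such a detection loop fits together, the sketch below slides a window over an image and classifies each patch with an SVM; texture_features() is a hypothetical stand-in, since the improved TFCM descriptor itself is not reproduced here, and the training data are random placeholders.

    import numpy as np
    from sklearn.svm import SVC

    def texture_features(patch):
        # placeholder for the improved TFCM texture descriptor
        return np.array([patch.mean(), patch.std(),
                         np.abs(np.diff(patch, axis=0)).mean()])

    # hypothetical training patches with labels 1 (target) / 0 (background)
    X_train = np.random.rand(200, 3)
    y_train = np.random.randint(0, 2, 200)
    clf = SVC(kernel='rbf').fit(X_train, y_train)

    image = np.random.rand(256, 256)
    win, step = 32, 16                     # window size and sliding step
    detections = []
    for r in range(0, image.shape[0] - win + 1, step):
        for c in range(0, image.shape[1] - win + 1, step):
            patch = image[r:r + win, c:c + win]
            if clf.predict(texture_features(patch).reshape(1, -1))[0] == 1:
                detections.append((r, c))  # top-left corners of windows flagged as target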

A Fast Directionally Constrained Minimization of Power Algorithm for Extracting a Speech Signal Perpendicular to a Microphone Array

In this paper, an extended method of the directionally constrained minimization of power (DCMP) algorithm for broadband signals is proposed. The DCMP algorithm is a useful technique for extracting a target signal from the observed signals of a microphone array system. In the DCMP algorithm, the output power of the microphone array is minimized under a constraint of constant responses toward the directions of arrival (DOAs) of specific signals. In our algorithm, by limiting the directional constraint to the direction perpendicular to the sensor array, the computation time is reduced.
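
For reference, the standard single-constraint DCMP problem (on which the broadband extension builds) can be written as

    \min_{\mathbf{w}} \; \mathbf{w}^{H}\mathbf{R}\,\mathbf{w}
    \quad \text{subject to} \quad \mathbf{c}^{H}\mathbf{w} = h,
    \qquad
    \mathbf{w}_{\mathrm{opt}} = \frac{h^{*}\,\mathbf{R}^{-1}\mathbf{c}}{\mathbf{c}^{H}\mathbf{R}^{-1}\mathbf{c}},

where w is the weight vector, R the covariance matrix of the observed signals, c the steering vector of the look direction, and h the constrained response. For a signal arriving perpendicular (broadside) to the array, the inter-sensor delays are zero, so c reduces to the all-ones vector at every frequency; this frequency independence is presumably what allows the broadband constraint to be imposed without per-frequency steering computations and hence reduces the calculation time.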

Estimation of Relative Self-Localization Based On Natural Landmark and an Improved SURF

It is important for an autonomous mobile robot to know where it is at any time in an indoor environment. In this paper, we design a relative self-localization algorithm. The algorithm compares the interest points in two images and computes the relative displacement and orientation to determine the pose. First, we use the SURF algorithm to extract the interest points of the ceiling. Second, in order to reduce the amount of calculation, an improved SURF is used to extract the orientation and description of the interest points. Finally, according to the transformation of the interest points between the two images, the relative self-localization of the mobile robot is estimated.
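
Given the matched interest points from the two ceiling images, the relative rotation and translation can be recovered with a standard least-squares rigid alignment; the sketch below (a Kabsch-style solution with NumPy, not necessarily the paper's exact procedure) illustrates that final estimation step.

    import numpy as np

    def estimate_rigid_transform(p, q):
        """p, q: (N, 2) arrays of matched points in image 1 and image 2."""
        cp, cq = p.mean(axis=0), q.mean(axis=0)           # centroids
        H = (p - cp).T @ (q - cq)                         # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T                                    # rotation (Kabsch solution)
        if np.linalg.det(R) < 0:                          # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cq - R @ cp                                   # translation of the robot between frames
        theta = np.degrees(np.arctan2(R[1, 0], R[0, 0]))  # relative orientation in degrees
        return R, t, theta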

Recognition of Isolated Speech Signals using Simplified Statistical Parameters

We present a novel scheme to recognize isolated speech signals using certain statistical parameters derived from those signals. The determination of the statistical estimates is based on extracted signal information rather than the original signal information in order to reduce the computational complexity. After the speech signal is separated from ambient noise, subtle details of these estimates are first exploited to segregate polysyllabic words from monosyllabic ones. Precise recognition of each distinct word is then carried out by analyzing the histogram obtained from this information.
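
A toy version of this kind of processing chain, assuming a simple short-time-energy noise gate and per-frame standard deviations as the statistical estimates (both assumptions of this sketch, not details taken from the paper), could look like:

    import numpy as np

    def frame_signal(x, frame_len=256, hop=128):
        n = 1 + (len(x) - frame_len) // hop               # assumes len(x) >= frame_len
        return np.stack([x[i * hop:i * hop + frame_len] for i in range(n)])

    def histogram_features(x, bins=16):
        frames = frame_signal(np.asarray(x, dtype=float))
        energy = (frames ** 2).mean(axis=1)
        speech = frames[energy > 2.0 * np.median(energy)] # crude gate against ambient noise
        stats = speech.std(axis=1)                        # per-frame statistical estimate
        hist, _ = np.histogram(stats, bins=bins, density=True)
        return hist                                       # histogram compared across words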

A Flexible Architecture for Web Text Mining

Text Mining is an important step of the Knowledge Discovery process. It is used to extract hidden information from unstructured or semi-structured data. This aspect is fundamental because much Web information is semi-structured due to the nested structure of HTML code, much of it is linked, and much of it is redundant. Web Text Mining supports the whole knowledge mining process through the mining, extraction, and integration of useful data, information, and knowledge from Web page contents. In this paper, we present a Web Text Mining process able to discover knowledge in a distributed and heterogeneous multi-organization environment. The Web Text Mining process is based on a flexible architecture and is implemented in four steps able to examine Web content and to extract useful hidden information through mining techniques. Our Web Text Mining prototype starts from the retrieval of Web job offers, from which a Text Mining process draws out the information useful for their fast classification; this information is, essentially, the job offer location and the required skills.
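
As a toy illustration of that last step, a minimal extractor might match a job-offer text against keyword lists for skills and locations (the lists and the example text below are hypothetical):

    import re

    SKILLS = ["java", "sql", "python", "project management", "html"]

    def extract_offer_info(text, known_places):
        low = text.lower()
        skills = [s for s in SKILLS if s in low]
        places = [p for p in known_places
                  if re.search(r"\b" + re.escape(p.lower()) + r"\b", low)]
        return {"place": places[0] if places else None, "skills": skills}

    offer = "Web developer needed in Milan. Required: HTML, SQL and Java."
    print(extract_offer_info(offer, ["Milan", "Rome", "Turin"]))
    # {'place': 'Milan', 'skills': ['java', 'sql', 'html']}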

Flow Characteristics of Pulp Liquid in Straight Ducts

An experimental investigation was performed on pulp liquid flow in straight ducts with a square cross section. Fully developed steady flow was visualized, and the fiber concentration was obtained using a light-section method developed by the authors. The results reveal quantitatively, in a definite form, the distribution of the fiber concentration. From these results and measurements of the pressure loss, it is found that the flow characteristics of pulp liquid in ducts can be classified into five patterns. The relationships among the distributions of the mean and fluctuation of the fiber concentration, the pressure loss, and the flow velocity are discussed, and the features of each pattern are extracted. The degree of nonuniformity of the fiber concentration, indicated by the standard deviation of its distribution, decreases from 0.3 to 0.05 with increasing flow velocity for tested pulp liquid concentrations of 0.4 to 0.8%.

Antibacterial Capacity of Plumeria alba Petals

The antibacterial activity of methanolic extracts of Plumeria alba (frangipani) petals was evaluated against Escherichia coli, Proteus vulgaris, Staphylococcus aureus, Klebsiella pneumoniae, Pseudomonas aeruginosa, Staphylococcus saprophyticus, Enterococcus faecalis and Serratia marcescens using the disk diffusion method. The 80% concentration extract showed the highest inhibition zone against Escherichia coli (14.3 mm). The frangipani extract also showed high antibacterial activity against Staphylococcus saprophyticus, Proteus vulgaris and Serratia marcescens, but not larger than the zones of the positive control used. A comparison of two broad-spectrum antibiotics with the frangipani extracts showed that the 80% concentration extract produces the same zone of inhibition as streptomycin. The frangipani extracts showed no antibacterial activity towards Klebsiella pneumoniae, Pseudomonas aeruginosa and Enterococcus faecalis. There are differences in the sensitivity of different bacteria to the frangipani extracts, suggesting that frangipani's potency varies between these bacteria. The present results indicate that frangipani shows significant antibacterial activity, especially against Escherichia coli.

Deployment of Service Quality Characteristics

This work discusses an innovative methodology for the deployment of service quality characteristics. Four groups of organizational features that may influence the quality of services are identified: human resources, technology, planning, and organizational relationships. A House of Service Quality (HOSQ) matrix is built to extract the desired improvements in the service quality characteristics and to translate them into a hierarchy of important organizational features. The Mean Square Error (MSE) criterion enables the pinpointing of the few essential service quality characteristics to be improved as well as the selection of the vital organizational features. The method was implemented in an engineering supply enterprise and provides useful information on its vital service dimensions.

A Study of Computational Organizational Narrative Generation for Decision Support

Narratives are invaluable assets of human lives. Due to their distinct features, narratives are useful for supporting human reasoning processes. However, many useful narratives nowadays remain unexploited residues in organizations or in human minds. Researchers have devoted effort to investigating and improving narrative generation processes. This paper attempts to contemplate the essential components of narratives and to explore a computational approach to acquiring and extracting knowledge to generate narratives. The methodology and its significant benefits for decision support are presented.

Data Mining Using Learning Automata

In this paper, a data miner based on learning automata, called LA-miner, is proposed. The LA-miner extracts classification rules from data sets automatically. The proposed algorithm is built on function optimization using learning automata. The experimental results on three benchmarks indicate that the performance of the proposed LA-miner is comparable with (and sometimes better than) that of Ant-Miner (a data mining algorithm based on Ant Colony Optimization) and CN2 (a well-known data mining algorithm for classification).
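
The basic building block, a linear reward-inaction (L_RI) learning automaton, can be sketched as follows; how LA-miner composes such automata into rule terms is not reproduced here, and the usage comment is only one plausible arrangement.

    import numpy as np

    class LearningAutomaton:
        def __init__(self, n_actions, a=0.1):
            self.p = np.full(n_actions, 1.0 / n_actions)  # action probability vector
            self.a = a                                    # reward step size

        def choose(self):
            return np.random.choice(len(self.p), p=self.p)

        def update(self, action, reward):
            if reward:                                    # L_RI: probabilities move only on reward
                self.p = (1 - self.a) * self.p
                self.p[action] += self.a
                self.p /= self.p.sum()                    # keep a valid distribution

    # e.g. one automaton per attribute could pick the value used in a candidate
    # classification rule, with the rule's quality on the data set as the reward.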

Effect of Genotype, Explant Type and Growth Regulators on the Accumulation of Flavonoids of Silybum marianum L. in In vitro Culture

The extract of milk thistle contains a mix of flavonolignans termed silymarin. In order to analyze the influence of growth regulators, genotype, explant and subculture on the accumulation of flavonolignans, a study was carried out using two genotypes (Budakalszi and Noor Abad Moghan cultivars), cotyledon and hypocotyl explants, and solid MS media supplemented with different combinations of two growth regulators: kinetin (0.1, 1 mg/l) and 2,4-D (1, 2 mg/l). Seeds of the plant were germinated in MS media without growth regulators in a growth chamber at 26°C in darkness. For callus induction, the culture media were supplemented with different concentrations of 2,4-D and kinetin. Calli obtained from the explants were sub-cultured four times into fresh media of the first experiment. Flavonoids were extracted from the calli in the four subcultures. The flavonoid components were determined by high-performance liquid chromatography (HPLC) and separated into taxifolin, silydianin+silychristin, silybin A+B and isosilybin A+B. The results showed that the accumulation of silybin A+B increased with callus age, while the isosilybin A+B content decreased. The highest accumulation of taxifolin was observed in the first calli. Calli produced from the cotyledon explant of the Budakalszi cultivar were superior for silybin A+B, whereas calli from the hypocotyl explant produced higher amounts of taxifolin and silydianin+silychristin. The best cultivar for silymarin production in this study was Budakalszi. High amounts of silybin A+B (SBN A+B) and taxifolin (TXF) were obtained from the hypocotyl explant.

Standard Deviation of Mean and Variance of Rows and Columns of Images for CBIR

This paper describes a novel and effective approach to content-based image retrieval (CBIR) that represents each image in the database by a vector of feature values called the "standard deviation of mean vectors of the color distribution of the rows and columns of images for CBIR". In many areas of commerce, government, academia, and hospitals, large collections of digital images are being created. This paper describes an approach that uses image content as the feature vector for the retrieval of similar images. There are several classes of features that are used to specify queries: color, texture, shape, and spatial layout. Color features are often easily obtained directly from the pixel intensities. In this paper, feature extraction is done for the texture descriptors 'variance' and 'variance of variances'. First, the standard deviation of the row means and of the column means is calculated for the R, G, and B planes; these six values per image form a feature vector. Second, we calculate the variance of each row and column of the R, G, and B planes of an image; the six standard deviations of these variance sequences form another feature vector of dimension six. We applied our approach to a database of 300 BMP images. We have determined the capability of automatic indexing by analyzing image content, using color and texture as features and Euclidean distance as the similarity measure.
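
The two six-dimensional descriptors described above translate almost directly into code; a NumPy sketch (function names are ours) is:

    import numpy as np

    def cbir_features(img):
        """img: (H, W, 3) RGB array; returns the two 6-D feature vectors."""
        f_mean, f_var = [], []
        for ch in range(3):                               # R, G, B planes
            plane = img[:, :, ch].astype(float)
            f_mean += [plane.mean(axis=1).std(),          # std of the row means
                       plane.mean(axis=0).std()]          # std of the column means
            f_var += [plane.var(axis=1).std(),            # std of the row variances
                      plane.var(axis=0).std()]            # std of the column variances
        return np.array(f_mean), np.array(f_var)

    def similarity(fa, fb):
        return np.linalg.norm(fa - fb)                    # Euclidean distance for retrieval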

A Framework for Multi-Agent Based Data Mining: An Application to Spatial Data

Data mining is an extraordinarily demanding field concerned with the extraction of implicit knowledge and relationships that are not explicitly stored in databases. A wide variety of data mining methods have been introduced (classification, characterization, generalization...). Each of these methods includes more than one algorithm. A data mining system involves different user categories, which means that the user's behavior must be a component of the system. The problem at this level is to know which algorithm of which method to employ for an exploratory end, which one for a decisional end, and how they can collaborate and communicate. The agent paradigm presents a new way of conceiving and realizing a data mining system. The purpose is to combine different data mining algorithms to prepare elements for decision-makers, benefiting from the possibilities offered by multi-agent systems. In this paper, the agent framework for data mining is introduced, and its overall architecture and functionality are presented. The validation is made on spatial data. Principal results are presented.

A Robust Method for Encrypted Data Hiding Technique Based on Neighborhood Pixels Information

This paper presents a novel method for data hiding that uses neighborhood pixel information to calculate the number of bits that can be used for substitution, together with a modified Least Significant Bit technique for data embedding. The modified solution is independent of the nature of the data to be hidden and gives correct results along with unnoticeable image degradation. To find the number of bits available for data hiding, the technique uses the green component of the image, as the human eye is less sensitive to it, which makes it very difficult for the human eye to tell whether the image carries hidden data. The application further encrypts the data using a custom-designed algorithm before embedding the bits into the image for additional security. The overall process consists of three main modules, namely embedding, encryption, and extraction.
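
A much-simplified illustration of the embedding step, restricted to one least significant bit of the green channel (the neighborhood-based capacity estimation and the custom encryption are not reproduced here), is:

    import numpy as np

    def embed_bits(img, bits):
        """img: (H, W, 3) uint8 RGB image; bits: sequence of 0/1 payload bits."""
        out = img.copy()
        green = out[:, :, 1].ravel()
        if len(bits) > green.size:
            raise ValueError("payload too large for a 1-bit-per-pixel embedding")
        for i, b in enumerate(bits):
            green[i] = (green[i] & 0xFE) | b              # overwrite the least significant bit
        out[:, :, 1] = green.reshape(out.shape[:2])
        return out

    def extract_bits(img, n):
        return [int(v & 1) for v in img[:, :, 1].ravel()[:n]]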

On Developing an Automatic Speech Recognition System for Standard Arabic Language

Automatic Speech Recognition (ASR) applied to the Arabic language is a challenging task. This is mainly related to the specificities of the language, which confront researchers with multiple difficulties, such as insufficient linguistic resources and the very limited number of available transcribed Arabic speech corpora. In this paper, we are interested in the development of an HMM-based ASR system for the Standard Arabic (SA) language. Our fundamental research goal is to select the most appropriate acoustic parameters describing each audio frame, acoustic models, and speech recognition unit. To achieve this purpose, we analyze the effect of varying the frame windowing (size and period), the number of acoustic parameters resulting from feature extraction methods traditionally used in ASR, the speech recognition unit, the number of Gaussians per HMM state, and the number of embedded re-estimations of the Baum-Welch algorithm. To evaluate the proposed ASR system, a multi-speaker SA connected-digits corpus is collected, transcribed, and used throughout all experiments. A further evaluation is conducted on a speaker-independent continuous SA speech corpus. The phoneme recognition rate is 94.02%, which is relatively high compared with another ASR system evaluated on the same corpus.
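
For instance, the frame windowing whose size and period are varied in the experiments can be sketched as follows (the 25 ms size and 10 ms period are common defaults assumed here, not the paper's final choice):

    import numpy as np

    def frames(signal, fs=16000, size_ms=25, period_ms=10):
        size = int(fs * size_ms / 1000)                   # frame size in samples
        period = int(fs * period_ms / 1000)               # frame period (shift) in samples
        window = np.hamming(size)
        n = 1 + (len(signal) - size) // period            # assumes len(signal) >= size
        return np.stack([signal[i * period:i * period + size] * window
                         for i in range(n)])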

Wavelet and K-L Separability Based Feature Extraction Method for Functional Data Classification

This paper proposes a novel feature extraction method, based on the Discrete Wavelet Transform (DWT) and K-L Separability (KLS), for the classification of Functional Data (FD). The method combines the decorrelation and reduction properties of the DWT with the additive independence property of KLS, which is helpful for extracting classification features from FD. It is an advance on the popular wavelet-based shrinkage method for functional data reduction and classification. A theoretical analysis is given to prove the consistent convergence property, and a simulation study is carried out to compare the proposed method with the earlier shrinkage methods. The experimental results show that this method has advantages in classification efficiency, precision, and robustness.
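
A hedged sketch of the feature-extraction side: DWT coefficients via PyWavelets, then a per-coefficient separability score between two classes. A symmetric Kullback-Leibler divergence between fitted univariate Gaussians stands in below for the paper's KLS criterion, and the wavelet and level are arbitrary choices.

    import numpy as np
    import pywt

    def dwt_coeffs(x, wavelet='db4', level=3):
        return np.concatenate(pywt.wavedec(x, wavelet, level=level))

    def kl_separability(c0, c1, eps=1e-8):
        """c0, c1: (n_samples, n_coeffs) coefficient matrices of the two classes."""
        m0, v0 = c0.mean(axis=0), c0.var(axis=0) + eps
        m1, v1 = c1.mean(axis=0), c1.var(axis=0) + eps
        # symmetric KL divergence between univariate Gaussians, per coefficient
        return 0.5 * ((v0 / v1 + v1 / v0 - 2) + (m0 - m1) ** 2 * (1 / v0 + 1 / v1))

    # coefficients with the largest scores would be kept as classification features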

Characterization for Post-treatment Effect of Bagasse Ash for Silica Extraction

Utilization of bagasse ash as a silica source is one of the most common applications of this agricultural waste and valuable biomass byproduct of sugar milling. The high silica content of bagasse ash was used as the silica source for sodium silicate solution. Different heating temperatures, heating times, and acid treatments were studied for silica extraction. The silica was characterized using various techniques, including X-ray fluorescence, X-ray diffraction, scanning electron microscopy, and Fourier transform infrared spectroscopy. The synthesis conditions were optimized to obtain the bagasse ash with the maximum silica content. A silica content of 91.57 percent was achieved by heating bagasse ash at 600°C for 3 hours under oxygen feeding combined with HCl treatment. The results add value to bagasse ash utilization and help minimize the environmental impact of its disposal.

Extraction of Temporal Relations through the Creation of a Historical Natural Disaster Archive

In historical science and social science, the influence of natural disasters upon society is a matter of great interest. In recent years, some natural disaster archives have been created by hand, but this is inefficient and wasteful. We therefore propose a computer system to create a historical natural disaster archive. As the target of this analysis, we consider newspaper articles, which are typical sources that record the temporal relations of events surrounding natural disasters. To carry out this analysis, we identify the occurrences reported in newspaper articles by means of index entries, taking into account the events that are specific to natural disasters, and show the temporal relations between natural disasters. We designed and implemented an automatic system for the "extraction of the occurrences of natural disasters" and the "temporal relation table for natural disasters".
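
A toy sketch of the extraction step, using regular expressions over article text (the keyword list and date pattern are illustrative assumptions, not the system's actual index entries):

    import re

    DISASTERS = ["earthquake", "flood", "typhoon", "tsunami", "eruption"]
    DATE_RE = re.compile(r"(\d{4})[-/](\d{1,2})[-/](\d{1,2})")

    def index_article(text):
        entries = []
        for sentence in re.split(r"[.]", text):
            dates = DATE_RE.findall(sentence)
            events = [w for w in DISASTERS if w in sentence.lower()]
            for y, m, d in dates:
                for ev in events:
                    entries.append(((int(y), int(m), int(d)), ev))
        return sorted(entries)        # chronological order yields the temporal relation table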