Least-Squares Support Vector Machine for Characterization of Clusters of Microcalcifications

Clusters of Microcalcifications (MCCs) are among the most frequent signs of Ductal Carcinoma in Situ (DCIS) detected by mammography. The Least-Squares Support Vector Machine (LS-SVM) is a variant of the standard SVM. In this paper, LS-SVM is proposed as a classifier for labeling MCCs as benign or malignant based on relevant features extracted from enhanced mammograms. To establish the credibility of the LS-SVM classifier for this task, a comparative evaluation of its relative performance under different kernel functions is carried out, using the confusion matrix and ROC analysis. Experiments are performed on data extracted from mammogram images of the DDSM database: a total of 380 suspicious areas are collected, comprising 235 malignant and 145 benign samples. A set of 50 features is calculated for each suspicious area, from which an optimal subset of the 23 most suitable features is selected by Particle Swarm Optimization (PSO). The results of the proposed study are quite promising.
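
As an illustration of the classifier at the core of this abstract, the following is a minimal LS-SVM sketch in NumPy: unlike the standard SVM, LS-SVM obtains its support values by solving a single linear system. The RBF kernel and hyperparameter values are placeholder assumptions, not the paper's settings.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    # LS-SVM replaces the SVM quadratic program with one linear system:
    # [[0, 1^T], [1, K + I/gamma]] @ [b, alpha] = [0, y],  y in {-1, +1}
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, support values alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return np.sign(rbf_kernel(X_new, X_train, sigma) @ alpha + b)
```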

Pectoral Muscles Suppression in Digital Mammograms Using Hybridization of Soft Computing Methods

Breast region segmentation is an essential prerequisite in the computerized analysis of mammograms. It aims at separating the breast tissue from the background of the mammogram and comprises two independent segmentations. The first separates the background region, which usually contains annotations, labels and frames, from the whole breast region, while the second removes the pectoral muscle portion (present in Medio-Lateral Oblique (MLO) views) from the rest of the breast tissue. In this paper we propose a hybridization of Connected Component Labeling (CCL), fuzzy, and straight-line methods. The proposed methods work well for separating the pectoral region. After removal of the pectoral muscle from the mammogram, further processing is confined to the breast region alone. To demonstrate the validity of our segmentation algorithm, it is extensively tested on the 322 mammographic images of the Mammographic Image Analysis Society (MIAS) database. The segmentation results were evaluated using the Mean Absolute Error (MAE), Hausdorff Distance (HD), Probabilistic Rand Index (PRI), Local Consistency Error (LCE) and Tanimoto Coefficient (TC). The hybridization of the fuzzy and straight-line methods yields more than 96% of the curve segmentations rated adequate or better. In addition, a comparison with similar approaches from the state of the art is given, obtaining slightly improved results. Experimental results demonstrate the effectiveness of the proposed approach.
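
The Connected Component Labeling step can be sketched as follows: after a crude intensity threshold, only the largest foreground component (the breast) is kept, discarding annotation and frame artifacts. This is an illustrative sketch, not the authors' implementation; the threshold value is an assumed placeholder.

```python
import numpy as np
from scipy import ndimage

def largest_component_mask(image, threshold=0.1):
    """Keep only the largest connected foreground component (the breast)."""
    binary = image > threshold                  # crude intensity threshold
    labels, num = ndimage.label(binary)         # connected-component labeling
    if num == 0:
        return np.zeros_like(binary)
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                                # ignore the background label
    return labels == sizes.argmax()             # mask of the biggest blob
```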

iCCS: Development of a Mobile Web-Based Student Integrated Information System Using Hill Climbing Algorithm

This paper describes a conducive and structured information-exchange environment for the students of the College of Computer Studies at Manuel S. Enverga University Foundation. The system was developed to help students check their academic results, manage their profiles, perform self-enlistment, and manage their academic status, all of which can also be viewed on mobile phones. Developing class schedules in the traditional way is a long process that involves making a large number of choices. With the Hill Climbing Algorithm, however, the process of class scheduling, particularly with regard to the courses to be taken by the student in line with the curriculum, can be carried out automatically and arrive at an optimal solution. The proponent used Rapid Application Development (RAD) as the system development method, PHP as the programming language, and MySQL as the database.
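
A minimal sketch of hill climbing applied to class scheduling is shown below; the course list, slot list, and conflict counter are hypothetical placeholders rather than the iCCS implementation.

```python
import random

def conflicts(schedule, clashes):
    # Count pairs of clashing courses assigned to the same time slot.
    return sum(1 for a, b in clashes if schedule[a] == schedule[b])

def hill_climb(courses, slots, clashes, iters=10_000):
    schedule = {c: random.choice(slots) for c in courses}
    best = conflicts(schedule, clashes)
    for _ in range(iters):
        course, slot = random.choice(courses), random.choice(slots)
        old = schedule[course]
        schedule[course] = slot          # try a neighboring schedule
        score = conflicts(schedule, clashes)
        if score <= best:
            best = score                 # keep improving (or equal) moves
        else:
            schedule[course] = old       # revert worsening moves
        if best == 0:
            break
    return schedule, best

# Example with two clashing courses and two time slots:
print(hill_climb(["CS101", "CS102"], ["Mon9", "Mon10"], [("CS101", "CS102")]))
```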

Experimental Study on the Creep Characteristics of FRC Base for Composite Pavement System

The composite pavement system considered in this paper is composed of a functional surface layer, a fiber-reinforced asphalt middle layer and a fiber-reinforced lean concrete base layer. The mix design of the fiber-reinforced lean concrete corresponds to the mix composition of conventional lean concrete but is reinforced by fibers. The quasi-absence of research on the durability or long-term performance (fatigue, creep, etc.) of such a mix design stresses the need to evaluate experimentally the long-term characteristics of this layer composition. This study tests the creep characteristics of the fiber-reinforced lean concrete layer for composite pavement using a new creep device. The test results reveal that lean concrete mixed with fiber reinforcement and fly ash develops less creep than conventional lean concrete. The results of applying the CEB-FIP prediction equation indicate that a modified creep prediction equation should be developed to fit the new mix design of the layer.

Probabilistic Bhattacharya Based Active Contour Model in Structure Tensor Space

Object identification and segmentation applications require extracting the foreground object from the background. In this paper, a Bhattacharya-distance-based probabilistic approach is combined with an active contour model (ACM) to segment an object from the background. In the proposed approach, the Bhattacharya histogram is calculated in a non-linear structure tensor space. Based on this histogram, a new formulation of the active contour model is proposed to segment images. The results are tested on both color and gray images from the Berkeley image database. The experimental results show that the proposed model is applicable to both color and gray images as well as to both texture and natural images. Moreover, in comparison with the Bhattacharya-based ACM in ICA space, the proposed model is also able to segment multiple objects.
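
For reference, the Bhattacharyya distance that drives the contour evolution can be computed between two histograms as follows; the histograms themselves are placeholders here.

```python
import numpy as np

def bhattacharyya_distance(p, q, eps=1e-12):
    p = p / (p.sum() + eps)            # normalize to probability mass
    q = q / (q.sum() + eps)
    bc = np.sum(np.sqrt(p * q))        # Bhattacharyya coefficient in [0, 1]
    return -np.log(bc + eps)           # distance: 0 when p == q

# Example: foreground vs. background intensity histograms (placeholders).
fg = np.array([10.0, 40.0, 30.0, 20.0])
bg = np.array([25.0, 25.0, 25.0, 25.0])
print(bhattacharyya_distance(fg, bg))
```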

Augmented Reality on Android

Augmented Reality combines a live view of a real-world environment with computer-generated imagery. This paper studies and demonstrates efficient Augmented Reality development in the mobile Android environment with the native Java language and the Android SDK. Major components include a Barcode Reader, File Loader, Marker Detector, Transform Matrix Generator, and a cloud database.

An Application of the Data Mining Methods with Decision Rule

Rankings of the output of China's main agricultural commodities in the world for 1978, 1980, 1990, 2000, 2006, 2007 and 2008 have been released in the United Nations FAO database. Unfortunately, the world ranking of Chinese cotton lint output for 2008 is missing. This paper uses sequential data mining methods with decision rules to fill this gap. This new data mining method will help further improve the United Nations FAO database.

Micro-Hydrokinetic for Remote Rural Electrification

The standalone micro-hydrokinetic river (MHR) system is one of the promising technologies for remote rural electrification. It simply requires the flow of water rather than an elevation or head, which would entail expensive civil works. This paper demonstrates the economic benefit offered by a standalone MHR system compared with commonly used standalone systems such as solar, wind and diesel generator (DG) systems at a selected study site in KwaZulu-Natal. Wind speed and solar radiation data for the selected rural site were taken from the National Aeronautics and Space Administration (NASA) surface meteorology database. The Hybrid Optimization Model for Electric Renewables (HOMER) software was used to determine the most feasible solution when using an MHR, solar, wind or DG system to supply 5 rural houses. The MHR system proved to be the most cost-effective option at the study site due to its low cost of energy (COE) and low net present cost (NPC).
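
For orientation, the two ranking metrics can be sketched with the standard annualization formulas HOMER relies on; the interest rate, project lifetime, cost and load figures below are hypothetical.

```python
def crf(i, n):
    """Capital recovery factor for interest rate i over n years."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def net_present_cost(annualized_cost, i, n):
    return annualized_cost / crf(i, n)

def cost_of_energy(annualized_cost, kwh_served_per_year):
    return annualized_cost / kwh_served_per_year

# Example: $2,000/yr total annualized cost, 6% rate, 25-year project,
# serving 9,000 kWh/yr to the 5 rural houses (all placeholder figures).
print(net_present_cost(2000, 0.06, 25))   # total NPC in $
print(cost_of_energy(2000, 9000))         # COE in $/kWh
```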

Finding Fuzzy Association Rules Using FWFP-Growth with Linguistic Supports and Confidences

In data mining, association rules are used to discover relations among the items of a transaction database. Once the data are collected and stored, valuable rules can be found through association rule mining, helping managers devise marketing strategies and plan the market framework. In this paper, we apply fuzzy partition methods to determine the membership functions of the quantitative values of each transaction item. Managers can also express the importance of items as linguistic terms, which are transformed into fuzzy sets of weights. Next, fuzzy weighted frequent pattern growth (FWFP-Growth) is used to complete the data mining process. The method is expected to improve on the Apriori algorithm with better efficiency over the whole association-rule process. An example is given to clearly illustrate the proposed approach.
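
The fuzzy partition step can be illustrated as follows: a purchased quantity is mapped to membership degrees in linguistic terms. The triangular breakpoints are assumed placeholders, not the paper's membership functions.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_quantity(q):
    # Linguistic terms for an item quantity in one transaction.
    return {
        "low":    triangular(q, 0, 1, 6),
        "middle": triangular(q, 1, 6, 11),
        "high":   triangular(q, 6, 11, 16),
    }

print(fuzzify_quantity(4))   # partly 'low' (0.4), partly 'middle' (0.6)
```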

Implementation of Geo-knowledge Based Geographic Information System for Estimating Earthquake Hazard Potential at a Metropolitan Area, Gwangju, in Korea

In this study, an inland metropolitan area, Gwangju, Korea, was selected to assess the amplification potential of earthquake motion and to provide information for regional seismic countermeasures. A geographic information system-based expert system was implemented for reliably predicting the spatial geotechnical layers across the entire region of interest by building a geo-knowledge database. In particular, the database consists of existing boring data gathered from prior geotechnical projects and surface geo-knowledge data acquired from site visits. For the practical application of the geo-knowledge database to estimating the earthquake hazard potential related to site amplification effects in the study area, seismic zoning maps of geotechnical parameters, such as the bedrock depth and the site period, were created within the GIS framework. In addition, seismic zonation of site classification was performed to determine the site amplification coefficients for seismic design at any site in the study area.

Keywords: Earthquake hazard, geo-knowledge, geographic information system, seismic zonation, site period.
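
As an illustration of one mapped geotechnical parameter, the characteristic site period of a layered profile is commonly estimated as T_G = 4 * sum(h_i / Vs_i); the layer data below are hypothetical.

```python
def site_period(layers):
    """layers: list of (thickness_m, shear_wave_velocity_m_per_s) tuples."""
    return 4.0 * sum(h / vs for h, vs in layers)

# Example: 5 m of fill over 12 m of weathered soil above bedrock.
print(site_period([(5.0, 180.0), (12.0, 350.0)]))  # site period in seconds
```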

Elections Management Information Communication System Voter Ballot

The present work deals with a new scope of application of information and communication technologies for improving the election process in a biased environment. We introduce a new concept for constructing an information-communication system for an election participant. It consists of four main components: Software, Physical Infrastructure, Structured Information and Trained Staff. The Structured Information is the basis of the whole system: it is the collection of all possible events (irregularities among them) at the polling stations, structured in special templates and forms and integrated in mobile devices. The Software is a package of analytic modules that operates on a dynamic database. The application of modern communication technologies facilitates the immediate exchange of information and relevant documents between the polling stations and the participant's server. No less important is the training of the staff for the proper functioning of the system; an e-training system with various modules should be applied in this respect. The presented methodology is primarily focused on the election processes in countries of emerging democracies. It can be regarded as a tool for monitoring the election process by political organization(s) and as one of the instruments to foster the spread of democracy in these countries.

A Method for Music Classification Based on Perceived Mood Detection for Indian Bollywood Music

A great deal of research has been done in the past decade in the field of audio content analysis for extracting various kinds of information from audio signals. One significant piece of information is the "perceived mood" or "emotion" associated with a music or audio clip. This information is extremely useful in applications such as creating or adapting a playlist based on the mood of the listener, and it can also support better classification of a music database. In this paper we present a method to classify music not just on the basis of the audio clip's metadata but also using the "mood" factor to improve the classification. We propose an automated and efficient way of classifying music samples based on mood detected from the audio data, focusing in particular on Indian Bollywood music. The proposed method addresses the following problem: genre information (usually part of the audio metadata) alone does not suffice for good music classification. For example, the acoustic version of "Nothing Else Matters" by Metallica can be classified as melodic music, so a listener in a relaxed or chill-out mood might want to hear it. But more often than not this track is tagged with the metal/heavy-rock genre, and a listener who builds a playlist from genre information alone will miss out on it. Methods currently exist to detect mood in Western or similar kinds of music; our paper addresses the issue for Indian Bollywood music in an Indian cultural context.
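
A sketch of the kind of low-level audio features such a mood classifier could start from (tempo and energy), using the librosa library, is given below; the thresholds and mood labels are illustrative assumptions, not the paper's model.

```python
import librosa
import numpy as np

def coarse_mood(path):
    y, sr = librosa.load(path, sr=22050, mono=True)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)   # beats per minute
    energy = float(np.mean(librosa.feature.rms(y=y)))
    # Hypothetical rule: fast, loud tracks read as energetic.
    if tempo > 120 and energy > 0.1:
        return "energetic/happy"
    return "calm/melancholic"
```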

Fuzzy Processing of Uncertain Data

In practice, we often come across situations where it is necessary to make decisions based on incomplete or uncertain data. In control systems, this may be due to the lack of an exact mathematical model, or to its excessive complexity (e.g. nonlinearity), which makes it necessary to simplify the model or to solve it using a rule base. In the case of databases, when searching data we compare the selection requirements against the stored data using a similarity measure, where both the select query and the data themselves may contain vague terms, for example in the form of linguistic qualifiers. In this paper, we focus on the processing of uncertain data in databases and demonstrate it on an example of multi-criteria decision making in the selection of variants specified by a large number of technical parameters.
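
A small sketch of fuzzy selection from a database follows: stored records are scored against a vague requirement such as "price approximately 100" through a membership function. The membership shape and the records are illustrative assumptions.

```python
def about(x, target, spread):
    """Membership of x in the fuzzy set 'approximately target'."""
    return max(0.0, 1.0 - abs(x - target) / spread)

records = [("A", 80.0), ("B", 95.0), ("C", 140.0)]   # (variant, price)
query = lambda price: about(price, 100.0, 30.0)

# Rank variants by how well they satisfy the vague requirement.
for name, price in sorted(records, key=lambda r: -query(r[1])):
    print(name, round(query(price), 2))   # B 0.83, A 0.33, C 0.0
```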

Formulation and Evaluation of Vaginal Suppositories Containing Lactobacillus

The objective of this study was to develop a vaginal suppository containing Lactobacillus. Four kinds of vaginal suppositories containing Lactobacillus paracasei HL32 were formulated: 1) a conventional suppository with Witepsol H-15 as the base, 2) a conventional suppository with mixed polyethylene glycols (PEGs) as the base, 3) a hollow-type suppository with Witepsol H-15 as the base and 4) a hollow-type suppository with mixed PEGs as the base. The release studies demonstrated that the hollow-type suppository with mixed PEGs as the base gave the highest release of L. paracasei HL32 and was microbiologically stable after storage at 2–8°C over a period of 3 months.

Environmental Interference Cancellation of Speech with Radial Basis Function Networks: An Experimental Comparison

In this paper, we use Radial Basis Function Networks (RBFN) to solve the problem of environmental interference cancellation in speech signals. We show that the Second-Order Thin-Plate Spline (SOTPS) kernel cancels the interferences effectively. For comparison, we run our experiments with the two most commonly used conventional RBFN kernels: the Gaussian and the First-Order TPS (FOTPS) basis functions. The speech signals used here were taken from the OGI Multi-Language Telephone Speech Corpus and were corrupted with six types of environmental noise from the NOISEX-92 database. Experimental results show that the SOTPS kernel considerably outperforms the Gaussian and FOTPS functions on the speech interference cancellation problem.
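
For context, a generic RBFN regression sketch with the Gaussian kernel is shown below (the paper's preferred SOTPS kernel differs); the center placement and regularization are assumed choices.

```python
import numpy as np

def gaussian_design(x, centers, sigma):
    # Design matrix of Gaussian basis functions for 1-D inputs.
    d2 = (x[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def rbfn_fit(x, y, n_centers=20, sigma=0.5, ridge=1e-6):
    centers = np.linspace(x.min(), x.max(), n_centers)
    Phi = gaussian_design(x, centers, sigma)
    # Regularized least squares for the output-layer weights.
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_centers), Phi.T @ y)
    return centers, w

def rbfn_predict(x, centers, w, sigma=0.5):
    return gaussian_design(x, centers, sigma) @ w
```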

Application of Association Rule Mining in Supplier Selection Criteria

In this paper, the application of association rule mining to identifying the factors that affect supplier selection is presented in three stages: 1) criteria selection and information gathering, 2) association rule mining, and 3) validation and construction of the rule base. Afterwards, several applications of the rule base are explained. A numerical example is then presented and analyzed with the Clementine software. Some of the extracted rules, as well as the results, are presented at the end.
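
The two measures behind association rule mining can be sketched on hypothetical supplier-selection transactions, where each transaction is the set of criteria that mattered in one past selection:

```python
def support(itemset, transactions):
    # Fraction of transactions containing every item of the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

transactions = [
    {"low_price", "on_time_delivery"},
    {"low_price", "quality_cert", "on_time_delivery"},
    {"quality_cert", "on_time_delivery"},
    {"low_price"},
]
# Rule: low_price -> on_time_delivery
print(confidence({"low_price"}, {"on_time_delivery"}, transactions))  # 0.666...
```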

Voice Disorders Identification Using Hybrid Approach: Wavelet Analysis and Multilayer Neural Networks

This paper presents a new strategy for identifying and classifying pathological voices using a hybrid method based on the wavelet transform and neural networks. After speech acquisition from a patient, the speech signal is analysed in order to extract acoustic parameters such as the pitch, the formants, jitter, and shimmer. The obtained results are then compared with normal, standard values by means of a programmable database. Sounds are collected from healthy people and patients, and then classified into two categories. The speech database consists of several pathological and normal voices collected from the national hospital "Rabta-Tunis". The speech processing algorithm is run in a supervised mode, first to discriminate between normal and pathological voices and then to classify between neural and vocal pathologies (Parkinson's, Alzheimer's, laryngeal disorders, dyslexia, ...). Several simulation results are presented as a function of the disease and compared with the clinical diagnosis in order to obtain an objective evaluation of the developed tool.
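
A minimal sketch of the hybrid pipeline, wavelet sub-band energies feeding a multilayer neural network, is given below; the wavelet family, decomposition level, and network size are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # One average-energy value per sub-band (approximation + details).
    return np.array([np.sum(c ** 2) / len(c) for c in coeffs])

def train_voice_classifier(signals, labels):
    X = np.vstack([wavelet_features(s) for s in signals])
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
    return clf.fit(X, labels)   # labels: 0 = normal, 1 = pathological
```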

Text Mining Technique for Data Mining Application

Applying knowledge discovery techniques to unstructured text is termed Knowledge Discovery in Text (KDT), text data mining, or text mining. The decision tree approach is one of the most useful for classification problems. With this technique, a tree is constructed to model the classification process; there are two basic steps: building the tree and applying it to the database. This paper describes a proposed C5.0 classifier that applies rulesets, cross-validation and boosting to the original C5.0 in order to reduce the error rate. The feasibility and benefits of the proposed approach are demonstrated on a medical data set, hypothyroid. It is shown that the performance of a classifier on the training cases from which it was constructed gives a poor estimate of its true accuracy; whether by sampling or by using a separate test file, the classifier should be evaluated on cases that were not used to build it, and both sets should be large. If the cases in hypothyroid.data and hypothyroid.test were shuffled and divided into a new 2772-case training set and a 1000-case test set, C5.0 might construct a different classifier with a lower or higher error rate on the test cases. An important feature of See5 is its ability to generate classifiers called rulesets; the ruleset here has an error rate of 0.5% on the test cases. The standard errors of the means provide an estimate of the variability of the results. One way to get a more reliable estimate of predictive accuracy is f-fold cross-validation: the error rate of a classifier produced from all the cases is estimated as the ratio of the total number of errors on the hold-out cases to the total number of cases. The Boost option with x trials instructs See5 to construct up to x classifiers in this manner. Trials over numerous datasets, large and small, show that on average 10-classifier boosting reduces the error rate for test cases by about 25%.
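
C5.0/See5 itself is proprietary; the following sketch reproduces the three evaluation ideas of the paragraph (a decision tree, f-fold cross-validation, and 10-classifier boosting) with scikit-learn stand-ins on synthetic data in place of the hypothyroid set.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 3772 hypothyroid cases (2772 train + 1000 test).
X, y = make_classification(n_samples=3772, n_features=20, random_state=0)

# Error rate of a single tree, estimated by 10-fold cross-validation.
tree_error = 1 - cross_val_score(DecisionTreeClassifier(), X, y, cv=10).mean()

# 10-classifier boosting, echoing the figure in the text; the default
# base learner here is a depth-1 decision stump.
boost_error = 1 - cross_val_score(AdaBoostClassifier(n_estimators=10), X, y, cv=10).mean()
print(tree_error, boost_error)
```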

Multiwavelet and Biological Signal Processing

In this paper we aim to find the optimum multiwavelet for the compression of electrocardiogram (ECG) signals and then select it for use with the SPIHT codec. At present, it is not well known which multiwavelet is the best choice for optimum ECG compression. In this work, we examine different multiwavelets on 24 sets of ECG data with entirely different characteristics, selected from the MIT-BIH database. To assess the performance of the different multiwavelets in compressing ECG signals, in addition to factors known in the compression literature, such as Compression Ratio (CR), Percent Root Difference (PRD), Distortion (D) and Root Mean Square Error (RMSE), we also employ the Cross Correlation (CC) criterion, for studying the morphological relations between the reconstructed and original ECG signals, and the Signal-to-reconstruction-Noise Ratio (SNR). The simulation results show the Cardinal Balanced Multiwavelet (cardbal2) with the identity (Id) prefiltering method to be the most effective transformation. After finding the most efficient multiwavelet, we apply the SPIHT coding algorithm to the signal transformed by this multiwavelet.
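
The distortion measures listed above can be computed between an original ECG segment x and its reconstruction x_hat as follows (standard definitions, not code from the paper):

```python
import numpy as np

def prd(x, x_hat):
    """Percent Root Difference."""
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

def rmse(x, x_hat):
    return np.sqrt(np.mean((x - x_hat) ** 2))

def snr(x, x_hat):
    """Signal-to-reconstruction-noise ratio in dB."""
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2))

def cross_correlation(x, x_hat):
    """Normalized cross-correlation of the two waveforms."""
    return float(np.corrcoef(x, x_hat)[0, 1])
```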

Non-negative Principal Component Analysis for Face Recognition

Principal component analysis (PCA) is often combined with state-of-the-art classification algorithms to recognize human faces. However, PCA can only capture features contributing to the global characteristics of the data because it is a global feature selection algorithm. It misses features contributing to the local characteristics of the data because each principal component contains only some level of the data's global characteristics. In this study, we present a novel face recognition approach using non-negative principal component analysis, in which a non-negativity constraint is added to improve data locality and help elucidate latent data structures. Experiments are performed on the Cambridge ORL face database. We demonstrate the strong performance of the algorithm in recognizing human faces in comparison with the PCA and NREMF approaches.
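
One simple way to impose non-negativity on a principal component is a projected power iteration on the covariance matrix, sketched below as an illustration of the idea rather than the paper's exact algorithm:

```python
import numpy as np

def nonneg_principal_component(X, iters=200):
    Xc = X - X.mean(axis=0)            # center the data
    C = Xc.T @ Xc / len(Xc)            # covariance matrix
    w = np.abs(np.random.default_rng(0).standard_normal(C.shape[1]))
    for _ in range(iters):
        w = C @ w                      # power-iteration step
        w = np.clip(w, 0.0, None)      # project onto the non-negative orthant
        w /= np.linalg.norm(w) + 1e-12
    return w                           # non-negative loading vector
```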