Data Mining in the Medicine Domain Using Decision Trees and Support Vector Machines

In this paper, we use data mining to extract biomedical knowledge. In general, the complex biomedical data collected in population studies are treated with statistical methods; although these methods are robust, they are not sufficient by themselves to harness the potential wealth of the data. For that reason, we used two learning algorithms: Decision Trees and the Support Vector Machine (SVM). These supervised classification methods are applied to the diagnosis of thyroid disease. In this context, we propose to promote the study and use of symbolic data mining techniques.
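As a hedged illustration of the two classifiers compared here, the following scikit-learn sketch trains a decision tree and an SVM on a hypothetical thyroid dataset; the file name, column names, and parameters are assumptions rather than the paper's setup.

```python
# Hypothetical sketch: training the two classifiers compared in the paper on a
# thyroid dataset. File name, column names and split ratio are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Assumed CSV layout: clinical attributes plus a "diagnosis" label column.
data = pd.read_csv("thyroid.csv")
X, y = data.drop(columns=["diagnosis"]), data["diagnosis"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("Decision Tree", DecisionTreeClassifier(max_depth=5)),
                    ("SVM", SVC(kernel="rbf", C=1.0))]:
    model.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, model.predict(X_test)))
```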

Local Spectrum Feature Extraction for Face Recognition

This paper presents two techniques, local feature extraction using the image spectrum and low-frequency spectrum modelling using a GMM, to capture the underlying statistical information and improve the performance of a face recognition system. Local spectrum features are extracted using overlapping sub-block windows mapped onto the face image. For each block, the spatial domain is transformed to the frequency domain using the DFT. The low-frequency coefficients are preserved by applying a rectangular mask to the spectrum of the facial image and discarding the high-frequency coefficients. The low-frequency information is non-Gaussian in the feature space, and by combining several Gaussian functions with different statistical properties, the best feature representation can be modelled with a probability density function. Recognition is performed using the maximum likelihood value computed from the pre-calculated GMM components. The method is tested on the FERET datasets and achieves a 92% recognition rate.
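The following sketch illustrates the described pipeline in Python with NumPy and scikit-learn: overlapping blocks are transformed with the DFT, a rectangular low-frequency mask is kept, and per-subject GMMs score test images by log-likelihood. Block size, overlap, mask size, and GMM settings are assumptions, not the paper's parameters.

```python
# Illustrative sketch of the described pipeline (block size, overlap, mask size
# and GMM settings are assumptions, not the paper's exact parameters).
import numpy as np
from sklearn.mixture import GaussianMixture

def local_spectrum_features(img, block=16, step=8, keep=4):
    """Slide an overlapping window over the image, DFT each block and
    keep only the low-frequency corner of the magnitude spectrum."""
    feats = []
    for r in range(0, img.shape[0] - block + 1, step):
        for c in range(0, img.shape[1] - block + 1, step):
            spec = np.fft.fft2(img[r:r + block, c:c + block])
            low = np.abs(spec)[:keep, :keep]          # rectangular low-pass mask
            feats.append(low.ravel())
    return np.array(feats)

def enrol(images_per_subject, n_components=8):
    """Enrolment: one GMM per subject over that subject's block features."""
    models = {}
    for subject, imgs in images_per_subject.items():
        X = np.vstack([local_spectrum_features(im) for im in imgs])
        models[subject] = GaussianMixture(n_components=n_components).fit(X)
    return models

def recognise(img, models):
    """Pick the subject whose GMM gives the highest average log-likelihood."""
    X = local_spectrum_features(img)
    return max(models, key=lambda s: models[s].score(X))
```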

Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption

This paper addresses the problem of building secure computational services for encrypted information in Cloud Computing without decrypting the encrypted data; it thus meets the long-standing goal of a computational encryption model that can enhance the security of big data with respect to the privacy, confidentiality, and availability of users. The cryptographic model applied to the computational processing of encrypted data is the Fully Homomorphic Encryption scheme. We contribute a theoretical presentation of high-level computational processes, based on number theory and algebra, that can readily be integrated and leveraged in Cloud computing, together with detailed theoretical and mathematical foundations for fully homomorphic encryption models. This contribution advances the full implementation of a big-data-analytics-based cryptographic security algorithm.
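The following toy sketch is not the paper's scheme; it only illustrates, with a DGHV-style symmetric construction over the integers, how addition and multiplication can be carried out directly on ciphertexts and still decrypt correctly.

```python
# Toy DGHV-style symmetric scheme over the integers, for illustration only:
# it shows addition and multiplication on ciphertexts, not a secure or
# bootstrapped FHE construction.
import random

def keygen(bits=512):
    return random.getrandbits(bits) | 1          # secret key: large odd integer p

def encrypt(bit, p, noise_bits=16, mult_bits=512):
    r = random.getrandbits(noise_bits)           # small noise term
    q = random.getrandbits(mult_bits)            # random multiple of the key
    return bit + 2 * r + p * q

def decrypt(c, p):
    return (c % p) % 2                           # works while the noise stays below p

p = keygen()
c1, c2 = encrypt(1, p), encrypt(0, p)
assert decrypt(c1 + c2, p) == 1 ^ 0              # homomorphic XOR of the plaintext bits
assert decrypt(c1 * c2, p) == 1 & 0              # homomorphic AND of the plaintext bits
```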

A Non-Parametric Based Mapping Algorithm for Use in Audio Fingerprinting

Over the past few years, online multimedia collections have grown at a fast pace. Several companies have shown interest in studying ways to organise this volume of audio information without the need for human intervention to generate metadata. Many applications have also emerged on the market that are capable of identifying a piece of music in a short time; different audio effects and degradations, however, make it much harder to identify an unknown piece. In this paper, an audio fingerprinting system which makes use of a non-parametric mapping algorithm is presented. Parametric analysis is also performed using Gaussian Mixture Models (GMMs). The feature extraction methods employed are the Mel spectrum coefficients and the MPEG-7 basic descriptors. During non-parametric modelling, the extracted feature coefficients are replaced by bin numbers. The results show that the non-parametric analysis offers results comparable to those reported in the literature.
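A simplified sketch of the non-parametric idea is given below: feature coefficients (librosa MFCCs stand in for the Mel spectrum coefficients and MPEG-7 descriptors) are replaced by bin numbers and the resulting fingerprints are matched by distance. The bin count and the matching rule are assumptions.

```python
# Simplified sketch of the non-parametric idea: replace feature coefficients by
# bin numbers and compare histogram-like fingerprints. Bin count, feature choice
# (librosa MFCCs) and the matching rule are assumptions.
import numpy as np
import librosa

def fingerprint(path, n_mfcc=13, n_bins=16):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)       # (n_mfcc, frames)
    # Map every coefficient to a bin number between 0 and n_bins-1.
    lo, hi = mfcc.min(axis=1, keepdims=True), mfcc.max(axis=1, keepdims=True)
    bins = np.floor((mfcc - lo) / (hi - lo + 1e-9) * (n_bins - 1)).astype(int)
    # Per-coefficient histogram of bin numbers, normalised to a distribution.
    hist = np.stack([np.bincount(row, minlength=n_bins) for row in bins])
    return hist / hist.sum(axis=1, keepdims=True)

def match(query_fp, database):
    """Return the database key whose fingerprint is closest in L1 distance."""
    return min(database, key=lambda k: np.abs(database[k] - query_fp).sum())
```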

Similarity Based Retrieval in Case Based Reasoning for Analysis of Medical Images

Content Based Image Retrieval (CBIR) coupled with Case Based Reasoning (CBR) is a paradigm that is becoming increasingly popular for the diagnosis and therapy planning of medical ailments using the digital content of medical images. This paper presents a survey of some of the promising approaches used in the detection of abnormalities in retina images, as well as in mammographic screening and in the detection of regions of interest in MRI scans of the brain. We also describe our proposed algorithm for detecting hard exudates in fundus images of the retinas of Diabetic Retinopathy patients.
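As a generic illustration of the CBR retrieval step (not the proposed exudate detection algorithm), the sketch below ranks past cases by cosine similarity of their feature vectors; the feature extraction itself is abstracted away.

```python
# Generic sketch of the CBR retrieval step: given a feature vector for a new
# image, return the most similar past cases. The feature extraction itself
# (exudate, mammogram or MRI descriptors) is abstracted away.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def retrieve(query_features, case_base, k=5):
    """case_base: list of (case_id, feature_vector, diagnosis) triples."""
    scored = [(cosine_similarity(query_features, f), cid, dx)
              for cid, f, dx in case_base]
    return sorted(scored, reverse=True)[:k]       # top-k most similar past cases
```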

Survey on Strategic Games and Decision Making

Game theory is the study of how people interact and make decisions in competitive situations. It has mainly been developed to study decision making in complex settings. Humans routinely alter their behaviour in response to changes in their social and physical environment. As a consequence, the outcomes of decisions that depend on the behaviour of multiple decision makers are difficult to predict and require highly adaptive decision-making strategies. In addition, decision makers may have preferences regarding the consequences for other individuals and may choose their actions to improve or reduce the well-being of others. The Nash equilibrium is a fundamental concept in the theory of games and the most widely used method of predicting the outcome of a strategic interaction in the social sciences. A Nash equilibrium exists when there is no profitable unilateral deviation for any of the players involved; in other words, no player in the game would take a different action as long as every other player's strategy remains the same.
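The definition can be checked by brute force on a small bimatrix game, as in the sketch below; the Prisoner's Dilemma payoffs are an arbitrary example, not taken from the survey.

```python
# Minimal illustration of the definition: a strategy profile is a pure Nash
# equilibrium if no player can gain by deviating unilaterally. Payoff matrices
# here are an arbitrary example (Prisoner's Dilemma), not from the survey.
import numpy as np

A = np.array([[-1, -3],    # row player's payoffs (cooperate = 0, defect = 1)
              [ 0, -2]])
B = np.array([[-1,  0],    # column player's payoffs
              [-3, -2]])

def is_nash(i, j):
    row_best = A[i, j] >= A[:, j].max()   # row player cannot gain by switching rows
    col_best = B[i, j] >= B[i, :].max()   # column player cannot gain by switching columns
    return row_best and col_best

equilibria = [(i, j) for i in range(2) for j in range(2) if is_nash(i, j)]
print(equilibria)   # [(1, 1)]: mutual defection is the unique pure Nash equilibrium
```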

Feature-Based Summarizing and Ranking from Customer Reviews

Due to the rapid growth of the Internet, web opinion sources are dynamically emerging that are useful to both potential customers and product manufacturers for prediction and decision-making purposes. These sources are user-generated content written in natural language as unstructured free text. Opinion mining techniques have therefore become popular for automatically processing customer reviews to extract product features and the user opinions expressed about them. Since customer reviews may contain both opinionated and factual sentences, a supervised machine learning technique is applied for subjectivity classification to improve the mining performance. In this paper, we address the task of opinion summarization. Product feature and opinion extraction is critical to opinion summarization because its effectiveness significantly affects the identification of semantic relationships. The polarity and numeric score of each feature are determined using the SentiWordNet lexicon. The problem of opinion summarization concerns how to relate the opinion words to a given feature. A probabilistic supervised learning model improves the results and is more flexible and effective.
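As a hedged illustration, the sketch below scores opinion words with NLTK's SentiWordNet interface and aggregates them per product feature; averaging over all synsets and the example feature-opinion pairs are simplifications of the paper's scoring.

```python
# Sketch of scoring an opinion word with SentiWordNet via NLTK. Averaging over
# all synsets of the word is a simplification; the paper's exact scoring and
# feature-opinion pairing are not reproduced here.
# Requires: nltk.download('wordnet'); nltk.download('sentiwordnet')
from nltk.corpus import sentiwordnet as swn

def polarity(word, pos='a'):
    """Return (positive - negative) averaged over the word's adjective synsets."""
    synsets = list(swn.senti_synsets(word, pos))
    if not synsets:
        return 0.0
    return sum(s.pos_score() - s.neg_score() for s in synsets) / len(synsets)

# Example: aggregate opinion scores per extracted product feature (made-up data).
feature_opinions = {"battery": ["good", "short"], "screen": ["bright"]}
summary = {feat: sum(polarity(w) for w in words)
           for feat, words in feature_opinions.items()}
print(summary)
```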

Design of Real Time Early Response Systems for Natural Disaster Management Based On Automation and Control Technologies

A new concept of response system is proposed to fill the gap that exists in reducing vulnerability during the immediate response to natural disasters. Real Time Early Response Systems (RTERSs) incorporate real-time information as feedback data for closing the control loop and for generating real-time situation assessments. A review of the state of the art on works that fit the concept of an RTERS is presented, and it is found that they focus mainly on man-made disasters. At the same time, in the response phase of natural disaster management, many works are devoted to creating early warning systems, but few efforts have addressed what to do once an alarm is activated. In this context, an RTERS emerges as a useful tool for supporting people in their decision-making process during natural disasters after an event is detected, and also as an innovative setting for applying well-known automation technologies and automatic control concepts and tools.

Customization of Moodle Open Source LMS for Tanzania Secondary Schools’ Use

Moodle is an open source learning management system that enables the creation of a powerful and flexible learning environment. Many organizations, especially learning institutions, have customized the Moodle open source LMS for their own use. In general, open source LMSs are of great interest because of the many advantages they offer in terms of cost, usage, and freedom to customize to fit a particular context. The Tanzania Secondary School e-Learning (TanSSe-L) system is the learning management system for Tanzania secondary schools. The TanSSe-L system was developed using a number of methods, one of which was the customization of the Moodle open source LMS. This paper presents a few of the ways in which the Moodle OS LMS was customized to produce a functional TanSSe-L system fitted to the requirements and specifications of the Tanzania secondary school context.

Fuzzy Based Visual Texture Feature for Psoriasis Image Analysis

This paper proposes a rotation-invariant texture feature based on the roughness property of the image for psoriasis image analysis. In this work, we apply this feature to image classification and segmentation. The fuzzy concept is employed to overcome the imprecision of roughness. Since a psoriasis lesion can be modeled as a rough surface, the feature is extended to calculate the Psoriasis Area Severity Index value. For classification and segmentation, the Nearest Neighbor algorithm is applied. We have obtained promising results in identifying affected lesions using the roughness index and in estimating the severity level.
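The sketch below is only a rough analogue of the approach: local standard deviation stands in for the roughness measure, simple piecewise-linear memberships stand in for the fuzzy model, and scikit-learn's k-nearest-neighbour classifier performs the classification; all parameter values are assumptions.

```python
# Rough illustration only: local standard deviation stands in for the paper's
# roughness measure, and simple memberships stand in for its fuzzy model.
# Window size, membership break-points and k are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.neighbors import KNeighborsClassifier

def roughness_map(img, size=7):
    """Local standard deviation as a crude, rotation-invariant roughness cue."""
    mean = uniform_filter(img.astype(float), size)
    mean_sq = uniform_filter(img.astype(float) ** 2, size)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0))

def fuzzy_memberships(r, low=5.0, high=25.0):
    """Degrees of membership in 'smooth', 'medium' and 'rough' texture."""
    mid = (low + high) / 2.0
    smooth = np.clip((high - r) / (high - low), 0, 1)
    rough = np.clip((r - low) / (high - low), 0, 1)
    medium = np.clip(1 - np.abs(r - mid) / ((high - low) / 2.0), 0, 1)
    return np.stack([smooth, medium, rough], axis=-1)

def patch_feature(patch):
    """Average fuzzy roughness memberships over a patch -> 3-D feature vector."""
    return fuzzy_memberships(roughness_map(patch)).reshape(-1, 3).mean(axis=0)

def train_classifier(patches, labels, k=3):
    """Nearest-neighbour classification of labelled patches (lesion vs. normal skin)."""
    X = np.array([patch_feature(p) for p in patches])
    return KNeighborsClassifier(n_neighbors=k).fit(X, labels)
```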

Parameters Used in Gateway Selection Schemes for Internet Connected MANETs: A Review

The wide use of Internet-based applications brings many challenges to researchers in guaranteeing the continuity of the connections needed by mobile hosts and providing them with reliable Internet access. One of the solutions proposed by the Internet Engineering Task Force (IETF) is to connect the local, multi-hop, and infrastructure-less Mobile Ad hoc Network (MANET) to the Internet infrastructure. This connection is made through multi-interface devices known as Internet gateways. Many issues are related to this connection, such as gateway discovery, handoff, address auto-configuration, and selecting the optimum gateway when multiple gateways exist. Many studies have proposed gateway selection schemes based on a single selection criterion or on weighted multiple criteria. In this work, a review of some of these schemes is presented, showing the differences, features, challenges, and drawbacks of each of them.
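A weighted multi-criteria selection rule of the kind reviewed here can be sketched as follows; the criteria, normalisations, and weights are placeholders rather than values from any particular scheme.

```python
# Illustrative weighted multi-criteria gateway selection. The criteria, their
# normalisation and the weights are placeholders, not taken from any single
# scheme reviewed in the paper.
def gateway_score(gw, w_hops=0.4, w_load=0.3, w_delay=0.2, w_lifetime=0.1):
    """Lower hop count/load/delay and higher route lifetime give a higher score."""
    return (w_hops     * 1.0 / (1 + gw["hops"])
          + w_load     * (1.0 - gw["load"])          # load assumed in [0, 1]
          + w_delay    * 1.0 / (1 + gw["delay_ms"])
          + w_lifetime * gw["lifetime_s"] / 60.0)    # normalise to roughly one minute

def select_gateway(gateways):
    return max(gateways, key=gateway_score)

gateways = [
    {"id": "GW1", "hops": 2, "load": 0.7, "delay_ms": 40, "lifetime_s": 30},
    {"id": "GW2", "hops": 3, "load": 0.2, "delay_ms": 25, "lifetime_s": 50},
]
print(select_gateway(gateways)["id"])
```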

Edge Detection in Low Contrast Images

The edges of low contrast images are not clearly distinguishable to the human eye, and it is difficult to find the edges and boundaries in such images. The present work introduces a new approach for low contrast images. A Chebyshev polynomial based fractional-order filter is used to preprocess the input image, and the Laplacian of Gaussian method is then applied to the preprocessed image for edge detection. The algorithm has been tested on two test images.
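The two-stage pipeline can be sketched as follows; a plain contrast stretch stands in for the Chebyshev-polynomial fractional-order filter, and a magnitude threshold stands in for zero-crossing detection, so this is an approximation rather than the paper's implementation.

```python
# Sketch of the two-stage pipeline: a preprocessing filter followed by
# Laplacian-of-Gaussian edge detection. A plain contrast stretch stands in for
# the Chebyshev-polynomial fractional-order filter, which is not reproduced here.
import numpy as np
from scipy.ndimage import gaussian_laplace

def stretch_contrast(img):
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min() + 1e-9)

def detect_edges(img, sigma=2.0, threshold=0.02):
    pre = stretch_contrast(img)                    # stand-in preprocessing step
    log = gaussian_laplace(pre, sigma=sigma)       # Laplacian of Gaussian response
    return np.abs(log) > threshold                 # crude binary edge map
```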

Analysis of Combined Use of NN and MFCC for Speech Recognition

The performance and analysis of a speech recognition system are illustrated in this paper. The approach recognizes the English words corresponding to the digits 0-9, spoken by two different speakers and captured in a noise-free environment. For feature extraction, Mel frequency cepstral coefficients (MFCC) are used, which give a set of feature vectors from the recorded speech samples. A neural network model is used to enhance the recognition performance; specifically, a feed-forward neural network trained with the back-propagation algorithm is employed, although other speech recognition techniques such as HMM and DTW also exist. All experiments are carried out in Matlab.
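A hedged sketch of the pipeline is shown below using librosa for MFCC extraction and scikit-learn's MLPClassifier as the back-propagation-trained feed-forward network; the file layout, MFCC settings, and network size are assumptions, and the paper's own experiments were run in Matlab.

```python
# Sketch of the described pipeline with librosa for MFCC extraction and a
# feed-forward network trained by back-propagation (scikit-learn MLPClassifier).
# File layout, MFCC settings and network size are assumptions.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_vector(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                       # fixed-length utterance feature

def train_digit_recogniser(files):
    """files: list of (wav_path, digit_label) pairs for the ten spoken digits."""
    X = np.array([mfcc_vector(p) for p, _ in files])
    y = np.array([label for _, label in files])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
    return clf.fit(X, y)
```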

Color Image Segmentation Using SVM Pixel Classification

The goal of image segmentation is to cluster pixels into salient image regions. Segmentation can be used for object recognition, occlusion boundary estimation within motion or stereo systems, image compression, image editing, and image database lookup. In this paper, we present color image segmentation using support vector machine (SVM) pixel classification. First, the pixel-level color and texture features of the image are extracted and used as input to the SVM classifier; these features are obtained using the homogeneity model and Gabor filters. With the extracted pixel-level features, the SVM classifier is trained using FCM (Fuzzy C-Means). The image segmentation thus takes advantage of both the pixel-level information of the image and the ability of the SVM classifier. Experiments show that the proposed method produces very good segmentation results with better efficiency, increasing the quality of the segmentation compared with other segmentation methods proposed in the literature.
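The sketch below follows the same overall structure under stated substitutions: per-pixel colour and Gabor texture features, pseudo-labels from unsupervised clustering (k-means stands in for FCM here), and an SVM trained on those labels; the filter parameters and feature choices are assumptions.

```python
# Sketch of the pipeline: per-pixel colour + Gabor texture features, pseudo-labels
# from unsupervised clustering, then an SVM trained on those labels. KMeans stands
# in for Fuzzy C-Means; filter parameters and feature choices are assumptions.
import numpy as np
from skimage.filters import gabor
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def pixel_features(rgb):
    gray = rgb.mean(axis=2)
    real, _ = gabor(gray, frequency=0.2)           # one Gabor response as a texture cue
    feats = np.dstack([rgb.astype(float) / 255.0, real[..., None]])
    return feats.reshape(-1, 4)                    # one row per pixel: R, G, B, texture

def segment(rgb, n_regions=3, sample=5000):
    X = pixel_features(rgb)
    idx = np.random.choice(len(X), size=min(sample, len(X)), replace=False)
    pseudo = KMeans(n_clusters=n_regions, n_init=10).fit_predict(X[idx])
    svm = SVC(kernel="rbf").fit(X[idx], pseudo)    # SVM learns the cluster boundaries
    return svm.predict(X).reshape(rgb.shape[:2])   # per-pixel region labels
```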

Formal Verification of Cache System Using a Novel Cache Memory Model

Formal verification is proposed to ensure the correctness of a design and to make functional verification more efficient. Since the cache plays a vital role in the design of a System on Chip (SoC), and a cache with a Memory Management Unit (MMU) and a cache memory unit makes the state space too large to verify by simulation, a formal verification approach is presented for such a system design. In this paper, a formal model checking verification flow is suggested, and a new cache memory model, called the "exhaustive search model", is proposed. Instead of using a large RAM to represent the whole cache memory, the exhaustive search model employs just two cache blocks. For a cache system containing a data cache (Dcache) and an instruction cache (Icache), the Dcache memory model and the Icache memory model are established separately using the same mechanism. Finally, the novel model is employed in the verification of a cache that is a module of a custom-built SoC system used in practice, and the results show that the cache system is verified correctly using the exhaustive search model, which makes the verification much more manageable and flexible.
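The following toy Python analogue conveys the exhaustive-search idea only: all short read/write sequences over a two-block cache model are explored and a read-after-write invariant is checked against a shadow memory. The paper's actual flow uses formal model-checking tools, not Python, and the cache model here is invented for illustration.

```python
# Toy analogue of the "exhaustive search" idea: instead of modelling a full
# cache RAM, explore all short read/write traces over a two-block direct-mapped
# cache and check a read-after-write invariant against a shadow memory.
from itertools import product

ADDRS, VALUES, DEPTH = [0, 1, 2, 3], [0, 1], 3

class TinyCache:
    def __init__(self):
        self.mem = {a: 0 for a in ADDRS}
        self.blocks = {0: None, 1: None}            # two blocks, indexed by addr % 2

    def write(self, addr, val):                     # write-through policy
        self.blocks[addr % 2] = (addr, val)
        self.mem[addr] = val

    def read(self, addr):
        tag = self.blocks[addr % 2]
        if tag is not None and tag[0] == addr:
            return tag[1]                           # cache hit
        self.blocks[addr % 2] = (addr, self.mem[addr])
        return self.mem[addr]                       # miss: refill from memory

ops = [("w", a, v) for a in ADDRS for v in VALUES] + [("r", a, None) for a in ADDRS]
for trace in product(ops, repeat=DEPTH):            # exhaustive exploration of traces
    cache, shadow = TinyCache(), {a: 0 for a in ADDRS}
    for kind, addr, val in trace:
        if kind == "w":
            cache.write(addr, val)
            shadow[addr] = val
        else:
            assert cache.read(addr) == shadow[addr], f"violation on {trace}"
print("read-after-write invariant holds for all traces up to depth", DEPTH)
```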

A Performance Analysis of Different Scheduling Schemes in WiMAX

IEEE 802.16 (WiMAX) aims to provide high-speed wireless access over a wide coverage area. The base station (BS) and the subscriber station (SS) are the main components of WiMAX. WiMAX uses either Point-to-Multipoint (PMP) or mesh topologies: in the PMP mode, the SSs connect to the BS to gain access to the network, whereas in the mesh mode, the SSs connect to each other to reach the BS. The main components of QoS management in the 802.16 standard are admission control, buffer management, and packet scheduling. In this paper, we use QualNet 5.0.2 to study the performance of different scheduling schemes, such as WFQ, SCFQ, RR, and SP, as the number of SSs increases. We find that as the number of SSs increases, the average jitter and average end-to-end delay increase and the throughput decreases.
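As a minimal illustration of one of the compared disciplines, the sketch below serves per-SS downlink queues in round-robin (RR) order; the packet contents and per-frame quantum are placeholders, and this is not a QualNet model.

```python
# Minimal illustration of one of the compared disciplines: round-robin (RR)
# service over per-SS downlink queues. Packet names, queue contents and the
# per-frame service quantum are placeholders; this is not a QualNet model.
from collections import deque

def round_robin(queues, packets_per_frame):
    """queues: {ss_id: deque of packet ids}. Returns the transmission order."""
    order, served = [], 0
    ids = list(queues)
    while served < packets_per_frame and any(queues.values()):
        for ss in ids:
            if queues[ss] and served < packets_per_frame:
                order.append((ss, queues[ss].popleft()))
                served += 1
    return order

queues = {"SS1": deque(["a1", "a2", "a3"]), "SS2": deque(["b1"]), "SS3": deque(["c1", "c2"])}
print(round_robin(queues, packets_per_frame=5))
# [('SS1','a1'), ('SS2','b1'), ('SS3','c1'), ('SS1','a2'), ('SS3','c2')]
```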

Building an Interactive Web-Based GIS System for Planning of Geological Survey Works

The planning of geological survey works is an iterative process involving planners, geologists, civil engineers, and other stakeholders, who perform different roles and have different points of view. Traditionally, the team used paper maps or CAD drawings to present the proposal, which is not an efficient way to present and share ideas on a site investigation proposal, such as the siting of borehole locations or seismic survey lines. This paper focuses on how a GIS approach can be utilised to develop a web-based system to support the decision-making process in the planning of geological survey works and to plan site activities carried out by the Singapore Geological Office (SGO). The authors design a framework for building an interactive web-based GIS system and develop a prototype that enables users to rapidly obtain existing geological information and to interactively plan borehole locations and seismic survey lines via a web browser. The prototype system is used daily by the SGO and has proven effective in increasing efficiency and productivity, as the time taken to plan geological survey works has been shortened. The prototype was developed using the ESRI ArcGIS API 3.7 for Flex, which is based on the ArcGIS 10.2.1 platform.

An Extensible Software Infrastructure for Computer Aided Custom Monitoring of Patients in Smart Homes

This paper describes the tradeoffs and the from-scratch design of a self-contained, easy-to-use health dashboard software system that provides customizable data tracking for patients in smart homes. The system is made up of different software modules and comprises a front-end and a back-end component. Built with HTML, CSS, and JavaScript, the front-end allows adding users, logging into the system, selecting metrics, and specifying health goals. The back-end consists of a NoSQL Mongo database, a Python script, and a SimpleHTTPServer written in Python. The database stores user profiles and health data in JSON format. The Python script uses the PyMongo driver library to query the database and displays formatted data as a daily snapshot of user health metrics against target goals. Any number of standard and custom metrics can be added to the system, and the corresponding health data can be fed automatically via sensor APIs, or manually as text or picture data files. A real-time METAR request API permits correlating weather data with patient health, and an advanced query system is implemented to allow trend analysis of selected health metrics over custom time intervals. Available on the GitHub repository system, the project is free to use for the academic purposes of learning and experimenting, or for practical purposes by building on it.
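A hedged sketch of the back-end query producing such a daily snapshot is shown below with PyMongo; the database, collection, and field names are assumptions rather than those used in the project.

```python
# Hedged sketch of a back-end daily-snapshot query: database, collection and
# field names are assumptions, not necessarily those used in the project.
from datetime import datetime, timedelta
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
db = client["health_dashboard"]

def daily_snapshot(user_id, day=None):
    day = day or datetime.utcnow().replace(hour=0, minute=0, second=0, microsecond=0)
    readings = db["readings"].find({
        "user_id": user_id,
        "timestamp": {"$gte": day, "$lt": day + timedelta(days=1)},
    })
    goals = {g["metric"]: g["target"] for g in db["goals"].find({"user_id": user_id})}
    for r in readings:
        target = goals.get(r["metric"])
        print(f'{r["metric"]}: {r["value"]} (goal: {target})')
```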

Automatic Verification Technology of Virtual Machine Software Patch on IaaS Cloud

In this paper, we propose an automatic verification technology for software patches in user virtual environments on IaaS Cloud, with the aim of decreasing the verification cost of patches. IaaS services have spread widely in recent years, and many users can customize virtual machines on IaaS Cloud like their own private servers. For software patches to the OS or middleware installed on virtual machines, users need to apply and verify the patches themselves, which increases their operation costs. Our proposed method replicates user virtual environments, extracts verification test cases for the user virtual environments from a test case DB, distributes patches to virtual machines in the replicated environments, and executes those test cases automatically on the replicated environments. We have implemented the proposed method on OpenStack using Jenkins and confirmed its feasibility. Using the implementation, we confirmed the effectiveness of our proposed idea of a 2-tier abstraction of software functions and test cases in reducing test case creation effort. We also evaluated the performance of automatic verification in terms of environment replication, test case extraction, and test case execution.
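A hedged sketch of kicking off the verification test cases through Jenkins' remote build API is shown below; the Jenkins URL, job name, parameters, and credentials are placeholders, and the OpenStack environment replication step is not shown.

```python
# Hedged sketch of triggering patch-verification test cases through Jenkins'
# remote build API. The Jenkins URL, job name, parameters and credentials are
# placeholders; the paper's OpenStack replication step is not shown.
import requests

JENKINS = "http://jenkins.example.com"
JOB = "patch-verification"

def trigger_verification(vm_id, patch_id, user, api_token):
    """Start a parameterised Jenkins job that runs the extracted test cases."""
    resp = requests.post(
        f"{JENKINS}/job/{JOB}/buildWithParameters",
        params={"VM_ID": vm_id, "PATCH_ID": patch_id},
        auth=(user, api_token),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.headers.get("Location")    # queue item URL for polling build status
```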

Bee Optimized Fuzzy Geographical Routing Protocol for VANET

A Vehicular Ad hoc Network (VANET) is a new technology which aims to ensure intelligent inter-vehicle communication and seamless Internet connectivity, leading to improved road safety, essential alerts, and access to comfort and entertainment. VANET operation is hindered by the uncertain mobility of the mobile nodes (vehicles). Routing algorithms use metrics to evaluate which path is best for packets to travel; metrics such as path length (hop count), delay, reliability, bandwidth, and load determine the optimal route. The proposed scheme exploits link quality, traffic density, and intersections as routing metrics to determine the next hop. This study enhances the Geographical Routing Protocol (GRP) using fuzzy controllers whose rules are optimized with Bee Swarm Optimization (BSO). Simulation results are compared with those of the conventional GRP.
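A hedged sketch of fuzzy next-hop scoring over the three metrics named above is shown below; the memberships and rule base are placeholders, since the paper's rules are tuned by Bee Swarm Optimization and are not reproduced here.

```python
# Sketch of fuzzy next-hop scoring over link quality, traffic density and
# distance to the nearest intersection. The memberships and rules below are
# placeholders; the paper's rule base is tuned by Bee Swarm Optimization.
def ramp_up(x):    return max(0.0, min(1.0, x))        # membership in "high"
def ramp_down(x):  return 1.0 - ramp_up(x)             # membership in "low"

def next_hop_score(link_quality, traffic_density, dist_to_intersection):
    """Inputs normalised to [0, 1]; returns a crisp score in [0, 1]."""
    good_link, poor_link = ramp_up(link_quality), ramp_down(link_quality)
    dense, sparse = ramp_up(traffic_density), ramp_down(traffic_density)
    near_junction = ramp_down(dist_to_intersection)

    # Example rules with min as fuzzy AND; each rule asserts an output level.
    rules = [
        (min(good_link, dense, near_junction), 1.0),   # ideal relay -> high score
        (min(poor_link, sparse), 0.1),                 # weak, isolated link -> low score
        (good_link, 0.7),                              # decent link alone -> medium-high
    ]
    num = sum(strength * out for strength, out in rules)
    den = sum(strength for strength, _ in rules) or 1.0
    return num / den                                   # weighted-average defuzzification

candidates = {"v12": (0.9, 0.8, 0.2), "v7": (0.5, 0.3, 0.6)}
best = max(candidates, key=lambda v: next_hop_score(*candidates[v]))
print(best)   # v12: the better-connected neighbour near an intersection wins
```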