An Off-the-Shelf Scheme for Dependable Grid Systems Using Virtualization

Grid computing has recently attracted wide attention in science, industry, and business, fields that require a vast amount of computation. Grid computing provides an environment in which many nodes (i.e., many computers) are connected to each other through a local or global network and made available to many users. In this environment, to process data among nodes for any application, each node performs mutual authentication using certificates issued by a Certificate Authority (CA). However, if a failure or fault occurs in the CA, no new certificates can be issued, and as a result a new node cannot join the grid environment. In this paper, an off-the-shelf scheme for dependable grid systems using virtualization techniques is proposed and its implementation is verified. The proposed approach uses virtualization techniques to restart an application, e.g., the CA, when it fails, so the system can tolerate a failure or fault in the CA. Since the proposed scheme is easily implemented at the application level, its implementation cost for the system builder is low compared with other methods. Simulation results show that the CA in the system can recover from a failure or fault.
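The abstract does not describe the restart mechanism in detail; a minimal watchdog sketch is given below, assuming the CA runs as a host service or virtual machine that can be health-checked and restarted from the command line. The process and service names (`ca-daemon`) are hypothetical placeholders, not the paper's actual commands.

```python
import subprocess
import time

# Hypothetical health-check and restart commands for the CA service.
CA_HEALTH_CHECK = ["pgrep", "-f", "ca-daemon"]          # assumed CA process name
CA_RESTART_CMD = ["systemctl", "restart", "ca-daemon"]  # assumed restart command

def ca_is_alive() -> bool:
    """Return True if the CA process appears to be running."""
    return subprocess.run(CA_HEALTH_CHECK, capture_output=True).returncode == 0

def watchdog(poll_seconds: float = 5.0) -> None:
    """Poll the CA periodically and restart it when a failure is detected."""
    while True:
        if not ca_is_alive():
            subprocess.run(CA_RESTART_CMD)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watchdog()
```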

A Hybrid Approach for Selection of Relevant Features for Microarray Datasets

Developing an accurate classifier for high-dimensional microarray datasets is a challenging task due to the small sample sizes available. It is therefore important to determine a set of relevant genes that classify the data well. Traditional gene selection methods often select the top-ranked genes according to their discriminatory power, but these genes are frequently correlated with each other, resulting in redundancy. In this paper, we propose a hybrid method combining feature ranking with a wrapper method (a genetic algorithm with a multiclass SVM) to identify a set of relevant genes that classify the data more accurately. A new fitness function for the genetic algorithm is defined that focuses on selecting the smallest set of genes that provides maximum accuracy. Experiments have been carried out on four well-known datasets. The proposed method provides better results than those reported in the literature in terms of both classification accuracy and the number of genes selected.
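The exact fitness definition is not given in the abstract; the sketch below shows one plausible form, assuming a weighted trade-off between cross-validated SVM accuracy and subset size. The weighting `alpha` and the linear-kernel SVM are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(chromosome: np.ndarray, X: np.ndarray, y: np.ndarray,
            alpha: float = 0.9) -> float:
    """Score a binary chromosome marking which genes are selected.

    Rewards high cross-validated accuracy of a multiclass SVM and
    penalizes larger gene subsets (alpha is an assumed trade-off weight).
    """
    selected = np.flatnonzero(chromosome)
    if selected.size == 0:
        return 0.0
    accuracy = cross_val_score(SVC(kernel="linear"), X[:, selected], y, cv=5).mean()
    size_penalty = selected.size / X.shape[1]
    return alpha * accuracy + (1.0 - alpha) * (1.0 - size_penalty)
```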

A GA-Based Role Assignment Approach for Web-based Cooperative Learning Environments

Web-based cooperative learning focuses on (1) the interaction and collaboration of community members and (2) the sharing and distribution of knowledge and expertise through network technology to enhance learning performance. Numerous studies of web-based cooperative learning have demonstrated that cooperative scripts have a positive impact on specifying, sequencing, and assigning cooperative learning activities. The literature has also indicated that role-play in web-based cooperative learning environments helps two or more students work together toward the completion of a common goal. Since students generally do not know each other and lack the face-to-face contact necessary to negotiate group roles in web-based cooperative learning environments, this paper extends the application of genetic algorithms (GA) and proposes a GA-based algorithm to tackle the problem of role assignment in web-based cooperative learning environments, which not only saves communication costs but also reduces conflict between group members in negotiating role assignments.

An Analysis of Blackouts for Electric Power Transmission Systems

In this paper, an analysis of blackouts in electric power transmission systems is carried out using a model and studied on simple networks with a regular topology. The proposed model describes load demand and network improvements evolving on a slow timescale, as well as the fast dynamics of cascading overloads and outages.

Levenberg-Marquardt Algorithm for Karachi Stock Exchange Share Rates Forecasting

Financial forecasting is an example of a signal processing problem. A number of methods are available to train the network; we have used the Levenberg-Marquardt algorithm for error back-propagation weight adjustment. Pre-processing of the data has reduced much of the large-scale variation to a smaller scale, reducing the variation in the training data.
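For reference, a single Levenberg-Marquardt weight update has the form shown in the sketch below, assuming the Jacobian of the residuals with respect to the weights is available (e.g. computed by back-propagation); the damping schedule for `mu` is omitted here.

```python
import numpy as np

def lm_step(weights: np.ndarray, residuals: np.ndarray,
            jacobian: np.ndarray, mu: float) -> np.ndarray:
    """Return updated weights: w - (J^T J + mu I)^(-1) J^T e."""
    jtj = jacobian.T @ jacobian
    damped = jtj + mu * np.eye(jtj.shape[0])   # damped Gauss-Newton system
    step = np.linalg.solve(damped, jacobian.T @ residuals)
    return weights - step
```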

Investigation of Water Vapour Transport Properties of Gypsum Using Genetic Algorithm

The water vapour transport properties of a gypsum block are studied as a function of relative humidity using an inverse analysis based on a genetic algorithm. The computational inverse analysis is performed on relative humidity profiles measured along the longitudinal axis of a rod-shaped sample. In the transient experiment, the sample is exposed to two environments with different relative humidity, while the temperature is kept constant. For the basic characterisation of gypsum and for the assessment of the input material parameters necessary for the computational application of the genetic algorithm, the basic material properties of gypsum are measured, together with its thermal and water vapour storage parameters. Applying the genetic algorithm, the relative-humidity-dependent water vapour diffusion coefficient and water vapour diffusion resistance factor are calculated.
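The objective the genetic algorithm minimizes is not stated in the abstract; a typical misfit for this kind of inverse analysis is sketched below, assuming a hypothetical forward model `simulate_profile` that returns the relative humidity along the sample for a candidate parameter vector.

```python
import numpy as np

def misfit(params: np.ndarray, measured_rh: np.ndarray, simulate_profile) -> float:
    """Sum of squared differences between simulated and measured RH profiles.

    `simulate_profile` is an assumed forward model of moisture transport;
    the GA searches the parameter space for the vector minimizing this value.
    """
    simulated_rh = simulate_profile(params)
    return float(np.sum((simulated_rh - measured_rh) ** 2))
```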

Using Partnerships to Achieve National Goals

Ireland developed a National Strategy 2030 that argued for the creation of a new form of higher education institution, a Technological University. The research reported here reviews the first stage of this partnership development. The study found that national policy can create system capacity and change, but that individual partners may have more to gain or lose in collaborating. When collaboration is presented as a zero-sum activity, fear among partners is high. The level of knowledge and networking within the higher education system possessed by each partner contributed to decisions to participate or not in a joint proposal for collaboration. Greater success resulted when there were gains for all partners. This research concludes that policy mandates can provide motivation to collaborate, but that partnerships need to be built on shared values rather than coercion by mandates.

A New Scheduling Algorithm Based on Traffic Classification Using Imprecise Computation

Wireless channels are characterized by bursty and location-dependent errors that are more serious than in wired networks. Many packet scheduling algorithms have been proposed for wireless networks to guarantee fairness and delay bounds. However, most existing schemes do not consider the differing traffic natures of packet flows, which causes a delay-weight coupling problem; in particular, serious queuing delays may be incurred for real-time flows. In this paper, we propose a scheduling algorithm that takes the traffic types of flows into consideration when scheduling packets and provides scheduling flexibility by trading off video quality to meet the playback deadline.
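A minimal sketch of the idea follows, assuming each packet carries a flow type, a playback deadline, and a flag marking its optional (droppable) part; these field names and the fixed per-packet service time are illustrative assumptions, not the paper's algorithm.

```python
import heapq

def schedule(packets, now, service_time=1.0):
    """Serve real-time packets earliest-deadline-first, dropping optional parts
    that would miss their playback deadline; best-effort packets follow."""
    realtime = [(p["deadline"], i, p) for i, p in enumerate(packets) if p["realtime"]]
    besteffort = [p for p in packets if not p["realtime"]]
    heapq.heapify(realtime)
    order, t = [], now
    while realtime:
        deadline, _, p = heapq.heappop(realtime)
        if p["optional"] and t + service_time > deadline:
            continue  # trade off quality: skip optional data that cannot make the deadline
        order.append(p)
        t += service_time
    return order + besteffort
```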

Outsourcing Opportunities for Internet Banking Solutions

The main goal of this article is to present a new application architecture model for a banking IT solution providing Internet banking services that is partially outsourced. First, we present the business rationale and a SWOT analysis to explain the reasons for the model. The most important factor motivating our model is the current boom in smartphones and tablet devices. Next, we focus on the IT architecture viewpoint, where we design the application, integration, and security models. Finally, we propose a generic governance model that serves as a basis for a specialized governance model. The specialized instance of the governance model is designed to ensure that the development and maintenance of the different parts of the IT solution are well governed over time.

Pervasive Differentiated Services: A QoS Model for Pervasive Systems

In this article, we introduce a mechanism by which the concept of differentiated services used in network transmission can be applied to provide quality-of-service levels to pervasive systems applications. The classical DiffServ model, including marking and classification, assured forwarding, and expedited forwarding, is utilized to create quality-of-service guarantees for various pervasive applications requiring different levels of quality of service. Through a collection of sensors, personal devices, and data sources, the transmission of context-sensitive data can automatically occur within a pervasive system at a given quality-of-service level. Our mechanism defines four entities: triggers, initiators, sources, and receivers. We explain the role of each and how quality of service is guaranteed.
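A small sketch of DiffServ-style marking for pervasive data sources is given below; the mapping from message attributes to per-hop behaviours and DSCP values is an illustrative assumption, not the paper's assignment.

```python
# Assumed mapping of per-hop behaviours to standard DSCP values.
DSCP = {"EF": 46, "AF11": 10, "BE": 0}

def mark(message: dict) -> dict:
    """Attach a per-hop behaviour to a context message based on its urgency."""
    if message.get("urgent"):          # e.g. a health-alert trigger
        phb = "EF"                     # expedited forwarding
    elif message.get("interactive"):   # e.g. a personal-device query
        phb = "AF11"                   # assured forwarding
    else:
        phb = "BE"                     # background sensor readings
    return {**message, "phb": phb, "dscp": DSCP[phb]}

print(mark({"source": "heart-rate-sensor", "urgent": True}))
```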

Intelligent Audio Watermarking using Genetic Algorithm in DWT Domain

In this paper, an innovative watermarking scheme for audio signals based on genetic algorithms (GA) in the discrete wavelet transform (DWT) domain is proposed. It is robust against the watermarking attacks commonly employed in the literature, and the quality of the watermarked audio is also considered. We employ a GA to optimize the location and intensity of the watermark. The watermark detection process can be performed without the original audio signal. The experimental results demonstrate that the watermark is inaudible and robust to many digital signal processing operations, such as cropping, low-pass filtering, and additive noise.
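A minimal sketch of DWT-domain embedding is shown below, assuming a GA supplies the detail band and strength `alpha`; the wavelet, decomposition level, and additive embedding rule are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np
import pywt

def embed_watermark(audio: np.ndarray, watermark_bits: np.ndarray,
                    band: int = 2, alpha: float = 0.01) -> np.ndarray:
    """Embed {0,1} watermark bits into one detail band of the audio DWT."""
    coeffs = pywt.wavedec(audio, "db4", level=3)     # assumed wavelet and level
    detail = coeffs[band]
    n = min(len(watermark_bits), len(detail))
    detail[:n] += alpha * (2 * watermark_bits[:n] - 1)   # map {0,1} -> {-1,+1}
    coeffs[band] = detail
    return pywt.waverec(coeffs, "db4")
```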

ANN-Based Classification of Indirect Immuno Fluorescence Images

In this paper, we address the issue of classifying the fluorescence intensity of a sample in Indirect Immuno-Fluorescence (IIF). Since IIF is by its nature a subjective, semi-quantitative test, we discuss a strategy for reliably labelling the image dataset using the diagnoses made by different physicians. We then discuss image pre-processing, feature extraction, and feature selection. Finally, we propose two ANN-based classifiers that can separate intrinsically dubious samples and whose error tolerance can be flexibly set. Measured performance shows error rates of less than 1%, which makes the method a candidate for use in daily medical practice, either to pre-select cases to be examined or to act as a second reader.

Face Recognition Using Morphological Shared-weight Neural Networks

We introduce an algorithm based on the morphological shared-weight neural network (MSNN). Being nonlinear and translation-invariant, the MSNN can provide better generalization for face recognition. Feature extraction is performed on grayscale images using hit-miss transforms that are independent of gray-level shifts, and the output is then learned by interacting with the classification process. The feature extraction and classification networks are trained together, allowing the MSNN to simultaneously learn feature extraction and classification for a face. For evaluation, we test robustness under variations in gray levels and noise while varying the network's configuration to optimize recognition efficiency and processing time. Results show that the MSNN performs better for grayscale image pattern classification than ordinary neural networks.
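For illustration, the grayscale hit-miss transform underlying this feature extraction can be sketched as below; in the MSNN the structuring elements are learned weights, whereas here they are fixed 3x3 arrays chosen only as an assumption for the example.

```python
import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

def hit_miss(image: np.ndarray, hit_se: np.ndarray, miss_se: np.ndarray) -> np.ndarray:
    """Erosion by the 'hit' element minus dilation by the 'miss' element.

    Adding a constant to the image shifts both terms equally, so the result
    is invariant to gray-level shifts.
    """
    return grey_erosion(image, structure=hit_se) - grey_dilation(image, structure=miss_se)

# Example with a random image and flat (all-zero) 3x3 structuring elements.
img = np.random.rand(64, 64)
feature_map = hit_miss(img, np.zeros((3, 3)), np.zeros((3, 3)))
```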

Using an Artificial Neural Network to Estimate Chemical Oxygen Demand

The yearly increase in the human population results in increasing water usage and demand. The Saen Saep canal is an important canal in Bangkok. The main objective of this study is to use an Artificial Neural Network (ANN) model to estimate the Chemical Oxygen Demand (COD) from data collected at 11 sampling sites. The data were obtained from the Department of Drainage and Sewerage, Bangkok Metropolitan Administration, during 2007-2011. Twelve water quality parameters, each of which affects the COD, are used as inputs to the model. The experimental results indicate that the ANN model provides a high correlation coefficient (R = 0.89).
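A minimal sketch of such a regression model is shown below, assuming `X` holds the twelve water quality parameters per sample and `y` the measured COD; the hidden-layer size and the placeholder random data are illustrative assumptions, not the paper's configuration or dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for the 2007-2011 monitoring records.
X = np.random.rand(200, 12)   # twelve water quality parameters per sample
y = np.random.rand(200)       # measured COD values

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
model.fit(X, y)
predicted_cod = model.predict(X)
```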

Multiple Subcarrier Indoor Geolocation System in MIMO-OFDM WLAN APs Structure

This report aims to utilize the characteristics of existing and future Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing Wireless Local Area Network (MIMO-OFDM WLAN) systems, such as multiple subcarriers, multiple antennas, and channel estimation, for indoor location estimation based on the Direction of Arrival (DOA) and Radio Signal Strength Indication (RSSI) methods. A hybrid DOA-RSSI method is also evaluated. The experimental results show that location estimation accuracy can be increased by minimizing the multipath fading effect; this is done by using multiple subcarrier frequencies over a wideband to estimate a single location. The proposed methods are analyzed in both a wide indoor environment and a typical room-sized office. In the experiments, WLAN terminal locations are estimated by measuring multiple subcarriers from arrays of three dipole antennas at the access points (APs). This research demonstrates highly accurate, robust, and hardware-free add-on software for indoor location estimation based on a MIMO-OFDM WLAN system.

Management of Multimedia Contents for Distributed e-Learning System

We have developed a distributed asynchronous Web-based training system. To improve the scalability and robustness of this system, all contents and functions are realized by mobile agents. These agents are distributed across computers and can use a peer-to-peer network based on a modified Content-Addressable Network. In the current system, only text data can be included in an exercise. To make the proposed system more useful, a mechanism is needed that not only supports multimedia data but also does not interfere with the user's learning even when exercises become large.

Performance Evaluation of Routing Protocols for High-Density Ad Hoc Networks Based on QoS Using the GloMoSim Simulator

Ad hoc networks are characterized by multihop wireless connectivity, frequently changing network topology, and the need for efficient dynamic routing protocols. We compare the performance of three routing protocols for mobile ad hoc networks: Dynamic Source Routing (DSR), Ad Hoc On-Demand Distance Vector Routing (AODV), and Location-Aided Routing (LAR1). The performance differentials are analyzed under varying network load, mobility, and network size. We simulate the protocols with the GloMoSim simulator. Based on the observations, we make recommendations about the conditions under which each protocol performs best.

Improving Co-integration Trading Rule Profitability with Forecasts from an Artificial Neural Network

Co-integration models the long-term, equilibrium relationship of two or more related financial variables. Even if co-integration is found, there may be short-run deviations from the long-run equilibrium relationship. The aim of this work is to forecast these deviations using neural networks and to create a trading strategy based on them. A case study is used: co-integration residuals from Australian Bank Bill futures are forecast and traded using various exogenous input variables combined with neural networks. The optimal exogenous input variables chosen for each neural network in previous work [1] are validated by comparing the forecasts and the corresponding profitability of each using a trading strategy.
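The sketch below illustrates the building blocks only: extracting co-integration residuals from two related price series and converting them into a simple threshold trading signal. The series names, the OLS long-run regression, and the entry threshold `k` are assumptions for illustration; the paper instead forecasts the residuals with neural networks before trading.

```python
import numpy as np
import statsmodels.api as sm

def cointegration_residuals(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Residuals of the long-run regression a = const + beta * b."""
    ols = sm.OLS(a, sm.add_constant(b)).fit()
    return a - ols.predict(sm.add_constant(b))

def trading_signal(residuals: np.ndarray, k: float = 1.0) -> np.ndarray:
    """+1 (long the spread) when the residual is k std below its mean, -1 when above."""
    z = (residuals - residuals.mean()) / residuals.std()
    return np.where(z < -k, 1, np.where(z > k, -1, 0))
```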

MONARC: A Case Study on Simulation Analysis for LHC Activities

The scale, complexity, and worldwide geographical spread of the LHC computing and data analysis problems are unprecedented in scientific research. The complexity of processing and accessing these data is increased substantially by the size and global span of the major experiments, combined with the limited wide-area network bandwidth available. We present the latest generation of the MONARC (MOdels of Networked Analysis at Regional Centers) simulation framework as a design and modeling tool for large-scale distributed systems applied to HEP experiments. We present simulation experiments designed to evaluate the capability of the current real-world distributed infrastructure to support existing physics analysis processes, and the means by which the experiments band together to meet the technical challenges posed by the storage, access, and computing requirements of LHC data analysis within the CMS experiment.

An Autonomous Collaborative Forecasting System Implementation – The First Step towards Successful CPFR System

In the past decade, artificial neural networks (ANNs) have been regarded as an instrument for problem-solving and decision-making; indeed, they have already delivered substantial improvements in efficiency and effectiveness in industry and business. In this paper, Back-Propagation neural Networks (BPNs) are used to demonstrate the performance of the collaborative forecasting (CF) function of a Collaborative Planning, Forecasting and Replenishment (CPFR®) system. CPFR balances product supply against customer demand in a Supply and Demand Chain (SDC). Several classical BPNs are grouped, collaborated, and exploited for the easy implementation of the proposed modular ANN framework based on the topology of an SDC. Each individual BPN is applied as a modular tool to forecast the SKU (Stock-Keeping Unit) levels managed and supervised at a point of sale (POS), a wholesaler, and a manufacturer in the SDC. The proposed modular BPN-based CF system is exemplified and experimentally verified using numerous datasets from the simulated SDC. The experimental results show that a complex CF problem can be divided into a group of simpler sub-problems based on the individual trading partners distributed over the SDC, and that SKU forecasting accuracy was satisfactory when the forecast values were compared to the original simulated SDC data. The primary task of implementing an autonomous CF is the study of a supervised ANN learning methodology that aims at making "knowledgeable" decisions for the best SKU sales plan and stock management.
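A sketch of the modular idea follows: one back-propagation network per trading partner in the SDC. The per-module network topology, the partner names, and the lagged-SKU input format are assumptions made for illustration, not the paper's configuration.

```python
from sklearn.neural_network import MLPRegressor

class PartnerForecaster:
    """A back-propagation network dedicated to one node of the SDC."""

    def __init__(self, name: str):
        self.name = name
        # Assumed topology: one small hidden layer per partner module.
        self.net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)

    def fit(self, lagged_sku, next_sku):
        """Train on past SKU levels (features) and the next-period level (target)."""
        self.net.fit(lagged_sku, next_sku)
        return self

    def forecast(self, lagged_sku):
        return self.net.predict(lagged_sku)

# One independent module per trading partner, mirroring the SDC topology.
modules = {name: PartnerForecaster(name) for name in ("POS", "wholesaler", "manufacturer")}
```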