Automatic Removal of Ocular Artifacts using JADE Algorithm and Neural Network

The ElectroEncephaloGram (EEG) is useful for clinical diagnosis and biomedical research. EEG signals often contain strong ElectroOculoGram (EOG) artifacts produced by eye movements and eye blinks, especially in EEG recorded from frontal channels. These artifacts obscure the underlying brain activity, making visual or automated inspection of the EEG difficult. The goal of ocular artifact removal is to remove the ocular artifacts from the recorded EEG while leaving intact the underlying background signals due to brain activity. In recent years, Independent Component Analysis (ICA) algorithms have demonstrated strong potential for obtaining the least dependent source components. In this paper, the independent components are obtained using the JADE algorithm, one of the best-performing separation algorithms, and each component is classified as either an artifact component or a neural component. A neural network is used for this classification. The network requires input features that faithfully represent the character of the input signals, so that it can classify the signals based on the key characteristics that differentiate them. In this work, Auto Regressive (AR) coefficients are used as the input features for classification. Two neural network approaches are used to learn classification rules from EEG data: first, a Polynomial Neural Network (PNN) trained by the GMDH (Group Method of Data Handling) algorithm, and second, a feed-forward neural network (FNN) classifier trained by the standard back-propagation algorithm. The results show that JADE-FNN performs better than JADE-PNN.
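
A minimal Python sketch of the pipeline described above. FastICA from scikit-learn stands in for JADE (which is not bundled with scikit-learn), Yule-Walker AR coefficients serve as features, and the synthetic recording, AR order and component labels are illustrative assumptions, not the paper's setup.

    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.neural_network import MLPClassifier
    from statsmodels.regression.linear_model import yule_walker

    # eeg: (n_samples, n_channels) recording; synthetic here for illustration.
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((5000, 8))

    # 1) Unmix the recording into independent components (FastICA stands in for JADE).
    ica = FastICA(n_components=8, random_state=0)
    components = ica.fit_transform(eeg)          # (n_samples, n_components)

    # 2) Represent each component by its AR coefficients (order 6 is an assumption).
    def ar_features(signal, order=6):
        rho, _sigma = yule_walker(signal, order=order)
        return rho

    X = np.array([ar_features(components[:, i]) for i in range(components.shape[1])])

    # 3) Classify components as artifact (1) or neural (0); labels are placeholders.
    y = np.array([1, 0, 0, 1, 0, 0, 1, 0])
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    clf.fit(X, y)

    # 4) Reconstruct the EEG after zeroing components flagged as artifacts.
    components[:, clf.predict(X) == 1] = 0.0
    cleaned = ica.inverse_transform(components)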

XML Integration of Data from CloudSat Satellite and GMS-6 Water Vapor Satellite

This study aimed at developing visualization tools for integrating CloudSat images and water vapor satellite images. KML was used to integrate data from the CloudSat satellite and the GMS-6 water vapor satellite. CloudSat 2D images were transformed into 3D polygons in order to obtain 3D images. Before overlaying the images on Google Earth, the GMS-6 water vapor satellite images had to be linearly rescaled. A web service was developed using webMathematica. For evaluation, the shoreline extracted from GMS-6 images was compared with the shoreline from LandSat images on Google Earth. The results showed that the shoreline from the GMS-6 images closely matched the shoreline in the LandSat images on Google Earth. The CloudSat visualizations were likewise compared with the GMS-6 images on Google Earth, and the results showed that the CloudSat and GMS-6 images were highly correlated.
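
For illustration, a short Python sketch of the kind of KML GroundOverlay used to drape a rescaled satellite image over Google Earth; the image file name and latitude/longitude bounds are placeholders, and the study's actual service was built with webMathematica.

    # Emit a KML GroundOverlay that drapes an image over Google Earth.
    # The image path and lat/lon bounds below are placeholders.
    KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
      <GroundOverlay>
        <name>{name}</name>
        <Icon><href>{image}</href></Icon>
        <LatLonBox>
          <north>{north}</north><south>{south}</south>
          <east>{east}</east><west>{west}</west>
        </LatLonBox>
      </GroundOverlay>
    </kml>"""

    def ground_overlay(name, image, north, south, east, west):
        return KML_TEMPLATE.format(name=name, image=image, north=north,
                                   south=south, east=east, west=west)

    with open("overlay.kml", "w") as f:
        f.write(ground_overlay("GMS-6 water vapor", "gms6_rescaled.png",
                               45.0, 5.0, 155.0, 95.0))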

Forecasting Fraudulent Financial Statements using Data Mining

This paper explores the effectiveness of machine learning techniques in detecting firms that issue fraudulent financial statements (FFS) and deals with the identification of factors associated with FFS. To this end, a number of experiments have been conducted using representative learning algorithms, which were trained on a data set of 164 fraud and non-fraud Greek firms from the period 2001-2002. Deciding which particular method to choose is a complicated problem. A good alternative to choosing only one method is to create a hybrid forecasting system incorporating a number of possible solution methods as components (an ensemble of classifiers). For this purpose, we have implemented a hybrid decision support system that combines the representative algorithms using a stacking variant methodology and achieves better performance than any of the examined simple and ensemble methods. To sum up, this study indicates that the investigation of financial information can be used for the identification of FFS and underlines the importance of financial ratios.
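
A minimal Python sketch of a stacking ensemble in the spirit described above, using scikit-learn; the base learners, meta-learner and synthetic feature matrix of financial ratios are illustrative assumptions, not the paper's exact configuration.

    import numpy as np
    from sklearn.ensemble import StackingClassifier, RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    # X: financial ratios per firm; y: 1 = fraudulent statement, 0 = non-fraud.
    # Synthetic placeholders standing in for the 164-firm data set.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((164, 10))
    y = rng.integers(0, 2, size=164)

    stack = StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                    ("nb", GaussianNB()),
                    ("rf", RandomForestClassifier(random_state=0))],
        final_estimator=LogisticRegression(max_iter=1000),
        cv=5,  # out-of-fold base predictions feed the meta-learner
    )
    print(cross_val_score(stack, X, y, cv=10).mean())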

Exploring the Application of Knowledge Management Factors in Esfahan University's Medical College

In this competitive age, knowledge management is one of the key tools of most successful organizations. Today some organizations measure their current knowledge and use it as an indicator for rating the organization in their reports. Given that universities and colleges of medical science play a great role in the public health of societies, their access to the newest scientific research and the establishment of organizational knowledge management systems are very important. In order to explore the application of knowledge management factors, a national study was undertaken. The main purpose of this study was to determine the extent to which knowledge management factors are applied, and to identify ways to establish wider application of a knowledge management system, in Esfahan University's Medical College (EUMC). Esfahan is the second largest city in Iran after Tehran, the capital, and the EUMC is the biggest medical college in Esfahan. To rate the application of knowledge management, this study uses a quantitative research methodology based on the Probst, Raub and Romhardt model of knowledge management. A group of 267 faculty members and staff of the EUMC were surveyed via questionnaire. Findings showed that the rate of application of knowledge management factors in the EUMC was lower than average. As a result, interviews with ten faculty members were conducted to find guidelines for establishing wider application of a knowledge management system in the EUMC.

Power Forecasting of Photovoltaic Generation

Photovoltaic power generation forecasting is an important task in renewable energy power system planning and operation. This paper explores the application of neural networks (NN) to the design of photovoltaic power generation forecasting systems for one week ahead, using weather databases that include the global irradiance and temperature of the city of Ghardaia (southern Algeria), collected with a data acquisition system. Simulations were run and the results are discussed, showing that the neural network technique is capable of decreasing the photovoltaic power generation forecasting error.
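
A minimal Python sketch of such a forecaster: a small neural network mapping irradiance and temperature to PV power, with the final week held out. The synthetic data, network size and linear power relation are illustrative assumptions, not the Ghardaia records or the paper's architecture.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # Hourly weather records: global irradiance (W/m^2) and temperature (C).
    # Synthetic placeholders for the data acquisition records.
    rng = np.random.default_rng(0)
    irradiance = np.clip(rng.normal(500, 200, 24 * 365), 0, None)
    temperature = rng.normal(25, 8, 24 * 365)
    power = 0.15 * irradiance - 0.05 * (temperature - 25) + rng.normal(0, 5, 24 * 365)

    X = np.column_stack([irradiance, temperature])
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                                       random_state=0))
    model.fit(X[:-168], power[:-168])        # hold out the final week (168 h)
    week_ahead = model.predict(X[-168:])     # one-week-ahead estimate
    mae = np.abs(week_ahead - power[-168:]).mean()
    print(mae)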

Automatic Clustering of Gene Ontology by Genetic Algorithm

The Gene Ontology is now widely used by many researchers for biological data mining and information retrieval, for the integration of biological databases, for finding genes, and for incorporating knowledge in the Gene Ontology for gene clustering. However, the increase in the size of the Gene Ontology has caused problems in maintaining and processing it. One way to keep it accessible is to cluster it into fragmented groups. Clustering the Gene Ontology is a difficult combinatorial problem and can be modeled as a graph partitioning problem. Additionally, the number k of clusters to use is not easily perceived, and deciding it is a hard algorithmic problem. Therefore, an approach for the automatic clustering of the Gene Ontology is proposed by incorporating a cohesion-and-coupling metric into a hybrid algorithm consisting of a genetic algorithm and a split-and-merge algorithm. Experimental results and an example of a modularized Gene Ontology in RDF/XML format are given to illustrate the effectiveness of the algorithm.
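
A toy Python sketch of the genetic-algorithm half of such a hybrid: chromosomes assign each term to one of at most k clusters, and the fitness rewards cohesion (intra-cluster edges) while penalizing coupling (inter-cluster edges). The term graph, parameters and the omission of the split-and-merge step are simplifying assumptions.

    import random

    # Toy term graph: adjacency over GO-like terms; edges are placeholders.
    edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
    n_terms, k_max = 6, 3

    def fitness(assign):
        # Cohesion = intra-cluster edges; coupling = the rest.
        cohesion = sum(assign[u] == assign[v] for u, v in edges)
        coupling = len(edges) - cohesion
        return cohesion - coupling

    def evolve(pop_size=30, generations=100):
        pop = [[random.randrange(k_max) for _ in range(n_terms)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]
            children = []
            for _ in range(pop_size - len(survivors)):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, n_terms)       # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.1:                # mutation
                    child[random.randrange(n_terms)] = random.randrange(k_max)
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    print(evolve())   # cluster assignment per term; empty clusters shrink k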

Emotional Intelligence and Retention: The Moderating Role of Job Involvement

The main aim of the current study was to examine the effect of emotional intelligence on retention. The study also aimed at analyzing the role of job involvement as a moderator of the effect of emotional intelligence on retention. Using data gathered from 241 employees working in hotels and tourism corporations listed on the Amman Stock Exchange in Jordan, emotional intelligence, job involvement and retention were measured. Hierarchical regression analyses were used to test the three main hypotheses. Results indicated that retention was related to emotional intelligence. Moreover, the study yielded support for the claim that job involvement had a moderating effect on the relationship between emotional intelligence and retention.
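
A sketch of the moderation test in Python with statsmodels, under placeholder data standing in for the 241-employee sample; the variable names and coefficients are illustrative assumptions, not the study's results.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Placeholder survey data standing in for the 241-employee sample.
    rng = np.random.default_rng(0)
    n = 241
    ei = rng.normal(0, 1, n)      # emotional intelligence (mean-centered)
    inv = rng.normal(0, 1, n)     # job involvement (mean-centered)
    retention = 0.4 * ei + 0.2 * inv + 0.3 * ei * inv + rng.normal(0, 1, n)
    df = pd.DataFrame({"retention": retention, "ei": ei, "inv": inv})

    # Step 1: main effects only; step 2: add the interaction (moderation) term.
    step1 = smf.ols("retention ~ ei + inv", data=df).fit()
    step2 = smf.ols("retention ~ ei * inv", data=df).fit()

    # A significant ei:inv coefficient, with an R^2 increase over step 1,
    # indicates that job involvement moderates the EI-retention relationship.
    print(step2.params["ei:inv"], step2.rsquared - step1.rsquared)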

Application of Process Approach to Evaluate the Information Security Risk and its Implementation in an Iranian Private Bank

Every organization is continually exposed to new damage and threats, which can result from its operations or from the pursuit of its goals. Methods of securing the workspace and its tools have changed greatly with the increasing application and development of information technology (IT). From this viewpoint, information security management systems evolved to build on proven methods rather than repeat past experience from scratch. In general, a correct response in an information security management system requires correct decision making, which in turn requires the comprehensive effort of managers and of everyone involved in each plan or decision. Obviously, not all aspects of a task or decision are defined under every decision-making condition; therefore, the possible or certain risks should be considered when making decisions. This is the subject of risk management, and it can influence those decisions. An investigation of different approaches in the field of risk management demonstrates their progress from quantitative to qualitative methods with a process approach.

Compromise Ratio Method for Decision Making under Fuzzy Environment using Fuzzy Distance Measure

The aim of this paper is to adopt a compromise ratio (CR) methodology for fuzzy multi-attribute single-expert decision making problems. In this paper, the rating of each alternative is described by linguistic terms, which can be expressed as triangular fuzzy numbers. The compromise ratio method for fuzzy multi-attribute single-expert decision making is considered here by taking a ranking index based on the concept that the chosen alternative should be as close as possible to the ideal solution and, simultaneously, as far away as possible from the negative-ideal solution. From a logical point of view, the distance between two triangular fuzzy numbers is also a fuzzy number, not a crisp value. Therefore a fuzzy distance measure, which is itself a fuzzy number, is used here to calculate the difference between two triangular fuzzy numbers. With the help of this fuzzy distance measure, it is shown that the compromise ratio is a fuzzy number, which makes it easier for the decision maker to reach a decision. The computation principle and the procedure of the compromise ratio method are described in detail in this paper. A comparative analysis of the previously proposed compromise ratio method [1] and the newly adopted method is illustrated with two numerical examples.
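
A small Python sketch of the overall ranking idea for triangular fuzzy numbers (TFNs). The particular fuzzy-distance construction and graded-mean defuzzification below are simplified stand-ins chosen for illustration, not the fuzzy distance measure of [1] or the paper's compromise ratio index; the ratings and reference points are placeholders.

    # Triangular fuzzy number (TFN) represented as (l, m, u) with l <= m <= u.
    def fuzzy_distance(a, b):
        # Interval-arithmetic difference turned into a magnitude TFN.
        # Simplified stand-in, not the measure of [1].
        lo, mid, hi = a[0] - b[2], a[1] - b[1], a[2] - b[0]
        return tuple(sorted((abs(lo), abs(mid), abs(hi))))

    def graded_mean(t):
        # Graded-mean defuzzification of a TFN, used only for the final ranking.
        return (t[0] + 4 * t[1] + t[2]) / 6.0

    # Linguistic ratings as TFNs, plus ideal and negative-ideal reference points.
    alternatives = {"A1": (5, 7, 9), "A2": (3, 5, 7)}
    ideal, neg_ideal = (7, 9, 10), (1, 1, 3)

    def closeness(alt):
        # Larger = nearer the ideal and farther from the negative-ideal.
        d_pos = graded_mean(fuzzy_distance(alt, ideal))
        d_neg = graded_mean(fuzzy_distance(alt, neg_ideal))
        return d_neg / (d_pos + d_neg)

    for name, tfn in sorted(alternatives.items(), key=lambda p: -closeness(p[1])):
        print(name, round(closeness(tfn), 3))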

An Automation of Check Focusing on CRUD for Requirements Analysis Model in UML

A key to the success of high-quality software development is defining a valid and feasible requirements specification. We have proposed a method of model-driven requirements analysis using the Unified Modeling Language (UML). The main feature of our method is to automatically generate a Web user interface mock-up from the UML requirements analysis model, so that we can confirm the validity of input/output data for each page, and of the page transitions on the system, by directly operating the mock-up. This paper proposes a support method for checking the validity of a data life cycle by using the model checking tool UPPAAL, focusing on CRUD (Create, Read, Update and Delete). Exhaustive checking improves the quality of the requirements analysis model, which is validated by the customers through the automatically generated mock-up. The effectiveness of our method is discussed through a case study of requirements modeling for two small projects: a library management system and a supportive textbook sales system for a university.
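
A small Python sketch of the CRUD life-cycle rule being checked. In the paper, the exhaustive exploration of page-transition paths is delegated to UPPAAL; this validator over a single enumerated operation sequence only illustrates the rule, and the path data is hypothetical.

    def violations(path):
        """Check one sequence of (operation, entity) pairs for CRUD violations."""
        created, deleted, errors = set(), set(), []
        for op, entity in path:
            if op == "C":
                if entity in created:
                    errors.append(f"{entity}: created twice")
                created.add(entity)
            elif op in ("R", "U"):
                if entity not in created or entity in deleted:
                    errors.append(f"{entity}: {op} before create or after delete")
            elif op == "D":
                if entity not in created or entity in deleted:
                    errors.append(f"{entity}: invalid delete")
                deleted.add(entity)
        return errors

    # One candidate path through the generated mock-up (illustrative data).
    path = [("C", "Book"), ("R", "Book"), ("U", "Member"), ("D", "Book")]
    print(violations(path))   # flags the update of 'Member' before its creation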

Economic Effects and Energy Use Efficiency of Incorporating Alfalfa and Fertilizer into Grass-Based Pasture Systems

A ten-year grazing study was conducted at the Agriculture and Agri-Food Canada Brandon Research Centre in Manitoba to study the effect of alfalfa inclusion and fertilizer (N, P, K, and S) addition on economics and efficiency of non-renewable energy use in meadow brome grass-based pasture systems for beef production. Fertilizing grass-only or alfalfa-grass pastures to full soil test recommendations improved pasture productivity, but did not improve profitability compared to unfertilized pastures. Fertilizing grass-only pastures resulted in the highest net loss of any pasture management strategy in this study. Adding alfalfa at the time of seeding, with no added fertilizer, was economically the best pasture improvement strategy in this study. Because of moisture limitations, adding commercial fertilizer to full soil test recommendations is probably not economically justifiable in most years, especially with the rising cost of fertilizer. Improving grass-only pastures by adding fertilizer and/or alfalfa required additional non-renewable energy inputs; however, the additional energy required for unfertilized alfalfa-grass pastures was minimal compared to the fertilized pastures. Of the four pasture management strategies, adding alfalfa to grass pastures without adding fertilizer had the highest efficiency of energy use. Based on energy use and economic performance, the unfertilized alfalfa-grass pasture was the most efficient and sustainable pasture system.

A Training Model for Successful Implementation of Enterprise Resource Planning

It is well recognized that one feature of a successful company is its ability to align its business goals with its information and communication technology platform. Enterprise Resource Planning (ERP) systems contribute to better performance by integrating various business functions and providing support for information flows. However, the complexity of these technological systems is known to prevent business users from exploiting them efficiently. This paper aims to investigate the role of training in improving the usage of ERP systems. To this end, we designed a survey instrument and administered it to employees of a Norwegian multinational provider of technology solutions. Based on the analysis of the collected data, we have delineated a training model that could be of high relevance for both researchers and practitioners as a step towards a better understanding of ERP system implementation.

An Agent Based Dynamic Resource Scheduling Model with FCFS-Job Grouping Strategy in Grid Computing

Grid computing is a group of clusters connected over high-speed networks that involves coordinating and sharing computational power, data storage and network resources operating across dynamic and geographically dispersed locations. Resource management and job scheduling are critical tasks in grid computing. Resource selection becomes challenging due to the heterogeneity and dynamic availability of resources. Job scheduling is an NP-complete problem, and different heuristics may be used to reach an optimal or near-optimal solution. This paper proposes a model for resource and job scheduling in a dynamic grid environment. The main focus is to maximize resource utilization and minimize the processing time of jobs. The grid resource selection strategy is based on a Max Heap Tree (MHT), which is well suited to large-scale applications; the root node of the MHT is selected for job submission. A job grouping concept is used to maximize resource utilization when scheduling jobs in grid computing. The proposed resource selection model and job grouping concept are used to enhance the scalability, robustness, efficiency and load balancing ability of the grid.
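
A minimal Python sketch of the two ideas combined: a max-heap (standing in for the MHT, keyed on resource capability in MIPS) whose root receives each job group, and FCFS grouping that packs queued jobs up to the root resource's capacity. The resource figures, job lengths and granularity parameter are illustrative assumptions.

    import heapq
    from collections import deque

    # Resources keyed by processing capability (MIPS); values are placeholders.
    resources = [("R1", 500), ("R2", 1200), ("R3", 800)]
    # Max-heap on capability: heapq is a min-heap, so push negated keys.
    heap = [(-mips, name) for name, mips in resources]
    heapq.heapify(heap)

    # FCFS queue of jobs with lengths in millions of instructions (MI).
    jobs = deque([("J1", 300), ("J2", 450), ("J3", 700), ("J4", 200), ("J5", 900)])

    def dispatch(granularity_s=1.0):
        """Group FCFS jobs until the group saturates the best resource's capacity."""
        neg_mips, name = heapq.heappop(heap)      # root of the heap = best resource
        capacity = -neg_mips * granularity_s      # MI the resource can run per slot
        group, used = [], 0
        while jobs and used + jobs[0][1] <= capacity:
            jid, length = jobs.popleft()          # strict first-come, first-served
            group.append(jid)
            used += length
        if not group and jobs:                    # oversize job: send it alone
            group.append(jobs.popleft()[0])
        heapq.heappush(heap, (neg_mips, name))    # resource returns to the heap
        return name, group

    while jobs:
        print(dispatch())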

Spatial Structure and Process of Arctic Warming and Land Cover Change in the Feedback Systems Framework

This paper examines the relationships among the various drivers of climate change that have both climatic and ecological consequences for vegetation and land cover change in arctic areas, particularly in arctic Alaska. It discusses the various processes that have created spatial and climatic structures that have facilitated observable vegetation and land cover changes in the Arctic. It also indicates that the drivers of both climatic and ecological changes in the Arctic are multi-faceted and operate in a system with both positive and negative feedbacks, which largely results in further increases or decreases of the initial drivers of climatic and vegetation change, mainly at the local and regional scales. It demonstrates that the impact of arctic warming on land cover change and Arctic ecosystems is not unidirectional and one-dimensional in nature, but rather represents multi-directional and multi-dimensional forces operating in a feedback system.

Synthesis and Characterization of PEG-Silane Functionalized Iron Oxide Nanoparticle as MRI T2 Contrast Agent

Iron oxide nanoparticles were synthesized by a reactive-precipitation method, followed by high-speed centrifugation and phase transfer in order to stabilize the nanoparticles in the solvent. The particle size of the SPIO was 8.2 nm by SEM, and the hydrodynamic radius was 17.5 nm by dynamic light scattering. Coercivity and saturation magnetization were determined by VSM (vibrating sample magnetometer); the coercivity (Hc) of the nanoparticles was lower than 10, and the saturation magnetization was higher than 65 emu/g. The stabilized SPIO was then transferred to the aqueous phase by reaction with an excess amount of poly(ethylene glycol) (PEG) silane. After filtration and dialysis, the SPIO T2 contrast agent was ready for use. The hydrodynamic radius of the final product was about 70~100 nm, and the relaxation rate R2 (1/T2) measured by magnetic resonance imaging (MRI) was larger than 200 sec⁻¹.

Visualisation and Navigation in Large Scale P2P Service Networks

In peer-to-peer service networks, where peers offer any kind of publicly available services or applications, intuitive navigation through all services in the network becomes more difficult as the number of services increases. In this article, a concept is discussed that enables users to intuitively browse and use large-scale P2P service networks. The concept extends the idea of creating virtual 3D environments solely based on peer-to-peer technologies. Aside from browsing, users shall be able to emphasize services of interest using their own semantic criteria. The appearance of the virtual world shall intuitively reflect network properties that may be of interest to the user. Additionally, the concept comprises options for load and traffic balancing. In this article, the requirements concerning the underlying infrastructure and the graphical user interface are defined. First impressions of the appearance of future systems are presented, and the next steps towards a prototypical implementation are discussed.

Rigorous Modeling of Fixed-Bed Reactors Containing Finite Hollow Cylindrical Catalyst with Michaelis-Menten Type of Kinetics

A large number of chemical, biochemical and pollution-control processes use heterogeneous fixed-bed reactors. The use of finite hollow cylindrical catalyst pellets can enhance conversion levels in such reactors. The absence of the pellet core can significantly lower the diffusional resistance associated with the solid phase. This leads to better utilization of the catalytic material, which is reflected in higher values of the effectiveness factor and leads ultimately to an enhanced conversion level in the reactor. It is, however, important to develop a rigorous heterogeneous model for the reactor that incorporates the two-dimensional nature of the solid phase arising from the finite hollow cylindrical catalyst pellet. At present, heterogeneous models reported in the literature invariably employ one-dimensional solid-phase models meant for spherical catalyst pellets. The objective of this paper is to present a rigorous model of fixed-bed reactors containing finite hollow cylindrical catalyst pellets. The reaction kinetics considered here is the widely used Michaelis-Menten kinetics for liquid-phase biochemical reactions. The reaction parameters used are those for the enzymatic degradation of urea. Results indicate that increasing the height-to-diameter ratio helps to improve the conversion level. On the other hand, decreasing the wall thickness is apparently not as effective. This can, however, be explained in terms of the higher void fraction of the bed, which causes a smaller amount of the solid phase to be packed in the fixed-bed biochemical reactor.
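
For reference, the standard Michaelis-Menten rate law assumed for the liquid-phase biochemical reaction, together with the usual definition of the effectiveness factor that the hollow geometry improves (textbook forms, not the paper's full two-dimensional model):

    r(S) = \frac{V_{\max}\, S}{K_m + S},
    \qquad
    \eta = \frac{\frac{1}{V_p}\int_{V_p} r(S)\,\mathrm{d}V}{r(S_s)}

where S is the substrate concentration, V_max the maximum reaction rate, K_m the Michaelis constant, V_p the pellet volume and S_s the concentration at the pellet surface.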

Device for 3D Analysis of Basic Movements of the Lower Extremity

This document details the process of developing a wireless device that captures the basic movements of the foot (plantar flexion, dorsal flexion, abduction, adduction) and the knee movement (flexion). It implements a motion capture system using hardware based on optical fiber sensors, chosen for their advantages in terms of range, noise immunity and speed of data transmission and reception. The operating principle used by this system is the detection and transmission of joint movement by mechanical elements and their measurement by optical ones (in this case infrared). Visual Basic software is used for the reception, analysis and processing of the signals acquired by the device, generating a 3D graphical representation of each movement in real time. The result is a boot in charge of capturing the movement, a transmission module (implementing XBee technology) and a receiver module that receives the information and sends it to the PC for processing. The main aim of this device is to contribute to fields such as bioengineering and medicine by helping to improve quality of life and movement analysis.
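
A minimal Python sketch (using pySerial) of the receiver-side logic the paper implements in Visual Basic: read framed joint-angle samples from the XBee receiver's serial port and hand them to a renderer. The port name and the comma-separated packet format are assumptions, not the device's actual protocol.

    import serial  # pySerial

    # Port name and 'flexion,abduction\n' frame format are assumptions.
    PORT, BAUD = "COM3", 9600

    def read_samples():
        with serial.Serial(PORT, BAUD, timeout=1.0) as link:
            while True:
                line = link.readline().decode("ascii", errors="ignore").strip()
                if not line:
                    continue
                try:
                    flex, abd = (float(v) for v in line.split(","))
                except ValueError:
                    continue                      # drop malformed frames
                yield flex, abd                   # pass to the 3D renderer

    for flexion, abduction in read_samples():
        print(f"flexion={flexion:.1f} deg, abduction={abduction:.1f} deg")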

Weka Based Desktop Data Mining as Web Service

Data mining is the process of sifting through large volumes of data, analyzing the data from different perspectives and summarizing it into useful information. One widely used desktop application for data mining is the Weka tool, a collection of machine learning algorithms implemented in Java and open-sourced under the General Public License (GPL). A web service is a software system designed to support interoperable machine-to-machine interaction over a network using SOAP messages. Unlike a desktop application, a web service is easy to upgrade, deliver and access, and does not occupy memory on the client system. Keeping in mind the advantages of a web service over a desktop application, in this paper we demonstrate how this Java-based desktop data mining application can be implemented as a web service to support data mining across the Internet.
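
A minimal Python sketch of the same wrapping pattern: a trained classifier exposed behind a web endpoint. Here Flask and scikit-learn stand in for the paper's Java/Weka stack, and JSON over HTTP stands in for SOAP; the route and payload format are illustrative assumptions.

    from flask import Flask, request, jsonify
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    # Stand-ins: scikit-learn replaces Weka's Java classifiers, JSON replaces SOAP.
    app = Flask(__name__)
    iris = load_iris()
    model = DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)

    @app.route("/classify", methods=["POST"])
    def classify():
        features = request.get_json()["features"]    # e.g. [5.1, 3.5, 1.4, 0.2]
        label = int(model.predict([features])[0])
        return jsonify({"class": str(iris.target_names[label])})

    if __name__ == "__main__":
        app.run()    # serves the mining model over the network, not the desktop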

Application of Whole Genome Amplification Technique for Genotype Analysis of Bovine Embryos

In recent years, there has been increasing interest in the use of genotyped bovine embryos for commercial embryo transfer programs. Biopsy of a few cells at the morula stage is essential for preimplantation genetic diagnosis (PGD). The low amount of DNA has limited the number of molecular analyses that can be performed within PGD. Whole genome amplification (WGA) promises to eliminate this problem. We evaluated the feasibility and performance of an improved primer extension preamplification (I-PEP) method with starting bovine genomic DNA from 1-8 cells as input to the WGA reaction. We optimized a short and simple I-PEP (ssI-PEP) procedure (~3 h). This optimized WGA method was assessed by six locus-specific polymerase chain reactions (PCRs), including restriction fragment length polymorphism (RFLP) analysis. The optimized WGA procedure possesses enough sensitivity for molecular genetic analyses from a few input cells. This opens a new era for generating characterized bovine embryos at the preimplantation stage.