Methane and Other Hydrocarbon Gas Emissions Resulting from Flaring in Kuwait Oilfields

Air pollution is a major environmental health problem, affecting developed and developing countries around the world. Increasing amounts of potentially harmful gases and particulate matter are being emitted into the atmosphere on a global scale, damaging human health and the environment. Petroleum-related air pollutants can have a wide variety of adverse environmental impacts. In the crude oil production sector, a thorough knowledge of the gaseous emissions resulting from the daily flaring of associated gas of known composition, under a range of operating conditions, is essential: it supports the control of gaseous emissions from flares and thus the protection of their immediate and distant surroundings against environmental degradation. The impacts of methane and non-methane hydrocarbon emissions from flaring at oil production facilities in the Kuwait Oilfields have been assessed through a screening study, using records of flaring operations taken at the gas and oil production sites and by analyzing available meteorological and air quality data measured at stations located near anthropogenic sources. In the present study, the Industrial Source Complex Short Term (ISCST3) dispersion model is used to calculate the ground-level concentrations of methane and non-methane hydrocarbons emitted by flaring across the Kuwait Oilfields. Simulation of hourly air quality in and around the oil production facilities of the State of Kuwait for the year 2006, with the respective source emission data supplied to the ISCST3 software, indicates that the levels of non-methane hydrocarbons from flaring exceed the allowable ambient air standard set by the Kuwait EPA. There is therefore a strong need to address this acute problem and minimize the impact of methane and non-methane hydrocarbons released by flaring over the urban areas of Kuwait.
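
ISCST3 is built around the steady-state Gaussian plume equation, so the core of each hourly ground-level calculation can be illustrated with a short sketch. The emission rate, wind speed, release height, and Briggs rural class-D dispersion coefficients below are illustrative assumptions, not values from the study.

```python
import numpy as np

def ground_level_concentration(Q, u, H, x):
    """Steady-state Gaussian plume, ground-level centreline concentration.

    Q : emission rate (g/s), u : wind speed (m/s),
    H : effective release height (m), x : downwind distance (m).
    Briggs rural dispersion coefficients for stability class D (assumed).
    """
    sigma_y = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)
    sigma_z = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)
    return (Q / (np.pi * u * sigma_y * sigma_z)) * np.exp(-H**2 / (2.0 * sigma_z**2))

# Hypothetical flare: 50 g/s of NMHC, 4 m/s wind, 30 m effective release height
x = np.linspace(100, 5000, 50)                             # downwind distances (m)
c = ground_level_concentration(50.0, 4.0, 30.0, x) * 1e6   # g/m^3 -> ug/m^3
print(f"peak ground-level concentration: {c.max():.1f} ug/m^3 "
      f"at {x[c.argmax()]:.0f} m downwind")
```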

Analysis of Statistical Data on Social Resources Dimension of Occupational Status Attainment: A Rational Choice Approach

The aim of the present study is to analyze empirical research on the social resources dimension of the occupational status attainment process and to relate it to the rational choice approach. The analysis suggests that the existing data on the tie-strength aspect of social resources are insufficient and do not support any conclusions about rational actors' behavior. The results concerning the work-relation aspect, however, are more encouraging.

A Materialized Approach to the Integration of XML Documents: The OSIX System

Data exchanged on the Web differ in nature from the data handled by classical database management systems: they are called semi-structured because they lack the regular, static structure of data in a relational database; their schema is dynamic, and fields or types may be missing. This raises the need for new techniques and algorithms to exploit and integrate such data and to extract the information relevant to the user. In this paper we present the OSIX system (Osiris-based System for Integration of XML Sources). OSIX follows a data warehouse model designed for the integration of semi-structured data, and more precisely for the integration of XML documents. Its architecture relies on the Osiris system, a DL-based model designed for the representation and management of databases and knowledge bases. Osiris is a view-based data model whose indexing system supports semantic query optimization. We show that query processing over an XML source is optimized by the indexing approach proposed by Osiris.
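
To illustrate what makes such data semi-structured, here is a small hypothetical example (not from the paper): two records of the same conceptual type whose structures differ, which a fixed relational schema could not store directly.

```python
import xml.etree.ElementTree as ET

# Two "book" records of the same conceptual type but with different
# structure: the second lacks a price and adds a nested translator element.
doc = """
<catalog>
  <book><title>Database Foundations</title><price>25</price></book>
  <book lang="fr">
    <title>Semi-structured Data</title>
    <translator><name>J. Dupont</name></translator>
  </book>
</catalog>
"""
for book in ET.fromstring(doc).iter("book"):
    title = book.findtext("title")
    price = book.findtext("price")          # may be missing entirely
    print(title, "-", price if price else "no price recorded")
```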

Fast Database Indexing for Large Protein Sequence Collections Using Parallel N-Gram Transformation Algorithm

With the rapid development of the life sciences and the flood of genomic information, the need for faster and more scalable search methods has become urgent. Indexing is one of the approaches that has been investigated. Indexing methods fall into three categories: length-based index algorithms, transformation-based algorithms, and mixed-technique algorithms. In this research we focus on transformation-based methods: we embed the N-gram method into a transformation-based method to build an inverted index table, and we then apply parallel methods to speed up index construction and to reduce the overall retrieval time when querying the genomic database. Our experiments show that the N-gram transformation algorithm is an economical solution, saving both time and space; the index is smaller than the dataset itself when the N-gram size is 5 or 6. The results for the parallel N-gram transformation algorithm indicate that parallel programming on large datasets is promising and can be improved further.
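
The paper's exact data structures are not reproduced here, but the idea of an N-gram inverted index is simple to sketch. This minimal version decomposes each sequence into overlapping N-grams in a process pool; the sequence names and parameters are illustrative.

```python
from collections import defaultdict
from multiprocessing import Pool

N = 5  # n-gram length; the paper reports good index sizes for N = 5 and 6

def ngrams_of(item):
    """Decompose one sequence into its set of overlapping N-grams."""
    seq_id, seq = item
    return seq_id, {seq[i:i + N] for i in range(len(seq) - N + 1)}

def build_inverted_index(sequences):
    """Map each N-gram to the set of sequence ids containing it.

    The per-sequence decomposition is embarrassingly parallel, so it is
    farmed out to a process pool; the merge stays sequential.
    """
    index = defaultdict(set)
    with Pool() as pool:
        for seq_id, grams in pool.map(ngrams_of, sequences.items()):
            for gram in grams:
                index[gram].add(seq_id)
    return index

if __name__ == "__main__":
    proteins = {"P1": "MKTAYIAKQRQISFVKSHFSRQ", "P2": "MKTAYIAKQR"}
    index = build_inverted_index(proteins)
    print(index["MKTAY"])   # -> {'P1', 'P2'}
```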

Dynamic Models versus Frailty Models for Recurrent Event Data

Recurrent event data are a special type of multivariate survival data. Dynamic models and frailty models are two approaches for dealing with this kind of data. We compare the two using the empirical standard deviation of the standardized martingale residual processes to assess fit, with both models based on the Aalen additive regression model. We find that both approaches take heterogeneity into account and produce residual standard deviations close to each other, both in a simulation study and on a real data set.
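
The flavor of the heterogeneity both model classes must absorb can be conveyed with a small simulation (not the paper's data or models): each subject receives a gamma-distributed frailty that multiplies a common baseline event rate.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_recurrent(n_subjects=200, base_rate=0.5, frailty_var=1.0, follow_up=10.0):
    """Recurrent events with a gamma frailty (hypothetical parameters).

    Each subject i gets a frailty Z_i ~ Gamma(mean 1, variance frailty_var);
    conditional on Z_i, events follow a Poisson process of rate Z_i * base_rate.
    """
    shape = 1.0 / frailty_var
    data = []
    for _ in range(n_subjects):
        z = rng.gamma(shape, frailty_var)   # mean 1, variance frailty_var
        t, times = 0.0, []
        while True:
            t += rng.exponential(1.0 / (z * base_rate))
            if t > follow_up:
                break
            times.append(t)
        data.append(times)
    return data

counts = [len(t) for t in simulate_recurrent()]
print(f"mean events per subject: {np.mean(counts):.2f}, variance: {np.var(counts):.2f}")
# Overdispersion (variance well above the mean) is the signature of frailty.
```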

Geovisualization of Tourist Activity Travel Patterns Using 3D GIS: An Empirical Study of Tamsui, Taiwan

The study of tourist activities and the mapping of their routes in space and time have become important issues in tourism management. Here we represent space-time paths for the tourism industry by visualizing individual tourist activities and the routes they follow using a 3D Geographic Information System (GIS). Considerable attention is devoted to measuring accessibility to shopping, eating, walking, and other services at the tourist destination. It turns out that GIS is a useful tool for studying the spatial behavior of tourists in the area, and it is especially valuable for space-time potential path area measures, in particular for accurately visualizing possible paths through the existing city road network. This study applies space-time concepts to a detailed street network map obtained from Google Maps to measure tourist paths both spatially and temporally. The paths are determined from map questionnaires covering the trip activities of 40 individuals. Analysis of these data makes it possible to determine the locations of the more popular paths, and the results can be visualized in 3D GIS to show the areas and potential activity opportunities accessible to tourists during their travel time.
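
A space-time path places geographic position on the base plane and clock time on the vertical axis. A minimal sketch with invented coordinates (the study itself uses a 3D GIS, not matplotlib):

```python
import matplotlib.pyplot as plt

# One tourist's activity diary: (x, y) map coordinates and clock time in hours.
# Coordinates and times are invented for illustration.
path = [(0.0, 0.0, 9.0),   # arrive at station
        (0.4, 0.2, 10.5),  # old street shopping
        (0.9, 0.5, 12.0),  # riverside lunch
        (1.3, 0.4, 14.5)]  # ferry pier
x, y, t = zip(*path)

ax = plt.figure().add_subplot(projection="3d")
ax.plot(x, y, t, marker="o")          # vertical axis = time, base plane = space
ax.set_xlabel("x (km)"); ax.set_ylabel("y (km)"); ax.set_zlabel("time (h)")
plt.show()
```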

ML Detection with Symbol Estimation for Nonlinear Distortion of OFDM Signal

In this paper, a new signal detection technique is proposed for detecting an orthogonal frequency-division multiplexing (OFDM) signal in the presence of nonlinear distortion. OFDM communication systems have several advantages; however, one remaining problem is the nonlinear distortion generated by the high-power amplifier at the transmitter due to the large dynamic range of the OFDM signal. The proposed method is maximum likelihood detection with symbol estimation: when training data are available, a neural network learns the characteristics of the received signal and estimates the new positions of the transmitted symbols, which are then provided to the maximum likelihood detector. To evaluate system performance, the nonlinear distortion of a traveling-wave tube amplifier driven by an OFDM signal is considered, and simulated bit-error-rate results are presented for 16-QAM OFDM systems.
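
The final detection stage reduces to a minimum-distance decision, which is the maximum likelihood rule under Gaussian noise. In the sketch below the reference points are the ideal 16-QAM constellation; in the paper they would be replaced by the neural network's estimates of the distorted symbol positions.

```python
import numpy as np

def qam16_constellation():
    """Ideal 16-QAM points; the paper substitutes the neural-network
    estimates of the distorted symbol positions for these."""
    levels = np.array([-3, -1, 1, 3])
    return (levels[:, None] + 1j * levels[None, :]).ravel()

def ml_detect(received, reference):
    """Minimum Euclidean distance decision = ML under Gaussian noise."""
    dists = np.abs(received[:, None] - reference[None, :])
    return reference[np.argmin(dists, axis=1)]

rng = np.random.default_rng(0)
ref = qam16_constellation()
tx = rng.choice(ref, size=1000)
rx = tx + (rng.normal(0, 0.3, 1000) + 1j * rng.normal(0, 0.3, 1000))
detected = ml_detect(rx, ref)
print("symbol error rate:", np.mean(detected != tx))
```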

Teaching Approach and Self-Confidence Effect Model Consistency between Taiwan and Singapore: Multi-Group HLM

This study was conducted to compare the effects of models for two countries, Taiwan and Singapore, using the TIMSS database. We used multi-group hierarchical linear modeling to compare the effects of the two country models, testing our hypotheses on 4,046 Taiwanese students and 4,599 Singaporean students from 2007 at two levels: the class level and the student (individual) level. Design quality is a class-level variable; the student-level variables are achievement and self-confidence. The results challenge the widely held view that retention has a positive impact on self-confidence. Suggestions for future research are discussed.

Dengue Disease Mapping with Standardized Morbidity Ratio and Poisson-gamma Model: An Analysis of Dengue Disease in Perak, Malaysia

Dengue is an infectious vector-borne viral disease commonly found in tropical and sub-tropical regions around the world, especially in urban and semi-urban areas, including Malaysia. No vaccine or chemotherapy is currently available for its prevention or treatment, so prevention and control of the disease depend on vector surveillance and control measures. Disease risk mapping is recognized as an important tool in prevention and control strategies, and the choice of statistical model for relative risk estimation matters, since a good model produces a good disease risk map. The aim of this study is therefore to estimate the relative risk of dengue, first with the most common statistic used in disease mapping, the Standardized Morbidity Ratio (SMR), and then with one of the earliest applications of Bayesian methodology, the Poisson-gamma model. The paper begins with a review of the SMR method, which we apply to dengue data from Perak, Malaysia. We then fit the Poisson-gamma model as an extension of the SMR approach. Both sets of results are displayed and compared using graphs, tables, and maps. The analysis shows that the Poisson-gamma model gives better relative risk estimates than the SMR; in particular, it overcomes the failure of the SMR in regions with no observed dengue cases. However, covariate adjustment in this model is difficult, and it cannot allow for spatial correlation between risks in adjacent areas. These drawbacks have motivated many researchers to propose alternative methods for estimating the risk.
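
The two estimators compared in the paper can be stated in a few lines. For region i with observed count O_i and expected count E_i, the SMR is O_i/E_i, while the Poisson-gamma model with prior theta_i ~ Gamma(a, b) has posterior mean relative risk (a + O_i)/(b + E_i). The counts and hyperparameters below are invented for illustration, not the Perak data.

```python
import numpy as np

# Hypothetical observed (O) and expected (E) dengue counts for five districts.
O = np.array([0, 4, 12, 3, 25])
E = np.array([2.1, 3.8, 9.5, 6.0, 18.2])

smr = O / E                       # unstable when E is small; exactly 0 when O = 0

# Poisson-gamma: O_i ~ Poisson(E_i * theta_i), theta_i ~ Gamma(a, b).
# Posterior mean of theta_i = (a + O_i) / (b + E_i): a shrinkage estimator.
# a and b would normally be estimated from all regions; fixed here for illustration.
a, b = 2.0, 2.0
pg = (a + O) / (b + E)

for i, (s, p) in enumerate(zip(smr, pg)):
    print(f"district {i}: SMR = {s:.2f}, Poisson-gamma RR = {p:.2f}")
# Note how the zero-count district gets a non-zero, stabilised risk estimate.
```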

Brain MRI Segmentation and Lesions Detection by EM Algorithm

In multiple sclerosis, pathological changes in the brain result in deviations in signal intensity on magnetic resonance images (MRI). Quantitative analysis of these changes, and their correlation with clinical findings, provides important information for diagnosis; this is the objective of our work, for which a new approach is developed. After enhancing image contrast and extracting the brain with a mathematical morphology algorithm, we proceed to brain segmentation. Our approach builds a statistical model of normal brain MRI from the data itself, clustering the tissue types, and then detects signal abnormalities (MS lesions) as a rejection class containing the voxels that are not explained by the model. We validate the method on MR images of multiple sclerosis patients by comparing its results with human expert segmentations.
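
A minimal sketch of the rejection-class idea, using scikit-learn's EM-fitted Gaussian mixture on one-dimensional stand-in intensities. The intensities, component count, and rejection threshold are assumptions for illustration, not the paper's data or parameters.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Stand-ins for voxel intensities of three normal tissue classes
# (e.g. CSF, grey matter, white matter) plus a few hyperintense outliers.
normal = np.concatenate([rng.normal(m, 5, 1000) for m in (30, 80, 120)])
lesions = rng.normal(180, 5, 20)
intensities = np.concatenate([normal, lesions]).reshape(-1, 1)

# Fit the normal-tissue model by EM, then reject voxels it cannot explain.
gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities)
log_lik = gmm.score_samples(intensities)
threshold = np.quantile(log_lik, 0.02)       # assumed rejection cut-off
candidates = intensities[log_lik < threshold]
print(f"{len(candidates)} voxels flagged as potential lesions")
```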

Flexible Communication Platform for Crisis Management

Disaster and emergency management are highly debated topics among experts. Fast communication helps in dealing with emergencies; the difficulty lies in network connectivity and data exchange. This paper proposes a solution: a new flexible communication platform for protecting the communication systems used in crisis management, discussing its possibilities and perspectives. The platform serves for everyday communication as well as for communication in crisis situations.

GIS-based Non-point Sources of Pollution Simulation in Cameron Highlands, Malaysia

Cameron Highlands is a mountainous area subject to torrential tropical showers. It extracts 5.8 million liters of water per day for drinking supply from its rivers at several intake points. The water quality of the rivers, however, has deteriorated significantly due to land clearing for agriculture, excessive use of pesticides and fertilizers, and construction activities in rapidly developing urban areas. These pollution sources, known as non-point pollution sources, are diverse and hard to identify, and therefore difficult to estimate. Hence, a Geographical Information System (GIS) was used to provide an extensive approach for evaluating land use and other mapping characteristics, in order to explain the spatial distribution of non-point sources of contamination in Cameron Highlands. The assessment method was developed by integrating GIS, databases, and pollution loads in the study area on the basis of the Cameron Highlands Master Plan (2006-2010). The results show that the highest annual runoff is generated by forest, 3.56 × 10^8 m^3/yr, followed by urban development, 1.46 × 10^8 m^3/yr. Urban development also produces the highest annual BOD load (1.31 × 10^6 kg BOD/yr), while agricultural activities and forest contribute the highest annual loads of phosphorus (6.91 × 10^4 kg P/yr) and nitrogen (2.50 × 10^5 kg N/yr), respectively. Best management practices (BMPs) are therefore suggested to reduce the pollution level in the area.
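
The abstract does not detail the loading computation; a common GIS-based approach multiplies each land-use area by an export coefficient. A sketch with invented numbers, not values from the Cameron Highlands study:

```python
# Export-coefficient sketch: annual load = sum over land uses of
# area (ha) * export coefficient (kg/ha/yr). All figures are invented
# for illustration.
areas_ha = {"forest": 50_000, "agriculture": 6_000, "urban": 1_500}
nitrogen_coeff = {"forest": 2.5, "agriculture": 30.0, "urban": 10.0}  # kg N/ha/yr

total_n = sum(areas_ha[lu] * nitrogen_coeff[lu] for lu in areas_ha)
print(f"estimated annual nitrogen load: {total_n:.3g} kg N/yr")
```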

Ensemble Learning with Decision Tree for Remote Sensing Classification

In recent years, a number of works proposing the combination of multiple classifiers to produce a single classification have been reported in the remote sensing literature. The resulting classifier, referred to as an ensemble classifier, is generally found to be more accurate than any of the individual classifiers making up the ensemble. As accuracy is the primary concern, much of the research on land cover classification focuses on improving classification accuracy. This study compares the performance of four ensemble approaches (boosting, bagging, DECORATE, and random subspace) with a univariate decision tree as the base classifier. Two training datasets, one noise-free and the other with 20 percent added noise, were used to judge the performance of the ensemble approaches. On the noise-free dataset, all ensemble approaches improve classification accuracy by about 4% over the univariate decision tree classifier, with the boosted decision tree achieving the highest accuracy of 87.43%. On the noisy dataset, bagging, DECORATE, and random subspace work well, whereas the performance of the boosted decision tree degrades to an accuracy of 79.7%, lower even than the 80.02% achieved by the unboosted decision tree classifier.
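
Three of the four ensembles are easy to reproduce with scikit-learn (DECORATE has no standard implementation there and is omitted); the data here are synthetic, not the remote sensing imagery used in the study. The sketch assumes scikit-learn >= 1.2 for the estimator keyword.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
base = DecisionTreeClassifier(random_state=0)

models = {
    "single tree": base,
    "bagging": BaggingClassifier(estimator=base, n_estimators=50, random_state=0),
    "boosting": AdaBoostClassifier(estimator=base, n_estimators=50, random_state=0),
    # random subspace = bagging over feature subsets rather than samples
    "random subspace": BaggingClassifier(estimator=base, n_estimators=50,
                                         max_features=0.5, bootstrap=False,
                                         random_state=0),
}
for name, model in models.items():
    print(f"{name}: {cross_val_score(model, X, y, cv=5).mean():.3f}")
```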

Mobile to Server Face Recognition: A System Overview

This paper presents a system overview of Mobile to Server Face Recognition, a face recognition application developed specifically for mobile phones. Images taken with mobile phone cameras lack quality owing to the cameras' low resolution, so a prototype was developed to test the chosen method. This paper reports on the system backbone without the face recognition functionality: the results demonstrate that the interaction between the mobile phones and the server works successfully. These results were obtained before the database was completely ready; system testing is currently under way with real images and a mock-up database to verify the face recognition algorithm used in the system. An overview of the whole system, including screenshots and a system flow chart, is presented, together with the motivation and justification for developing the system.

Adaptive Radio Resource Allocation for Multiple Traffic OFDMA Broadband Wireless Access System

In this paper, an adaptive radio resource allocation (RRA) algorithm for a multiple-traffic OFDMA system is proposed, which distributes subcarriers and bit loading among users according to their different QoS requirements and traffic classes. By classifying and prioritizing users based on their traffic characteristics and guaranteeing resources for higher-priority users, the scheme greatly decreases the outage probability of users requiring real-time transmission without reducing the spectral efficiency of the system, while the outage probability of data users is not increased compared with previously published RRA methods.
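
The paper's algorithm is not reproduced here, but a greedy sketch conveys the idea: each subcarrier is granted to an unsatisfied user of the highest-priority traffic class present, picking the user with the best channel gain on that subcarrier. All names and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n_subcarriers, users = 64, ["rt1", "rt2", "data1", "data2"]
priority = {"rt1": 0, "rt2": 0, "data1": 1, "data2": 1}    # 0 = real-time class
demand = {"rt1": 20, "rt2": 20, "data1": 12, "data2": 12}  # subcarriers wanted
gain = {u: rng.rayleigh(1.0, n_subcarriers) for u in users}  # per-subcarrier gains

allocation = {u: [] for u in users}
for sc in range(n_subcarriers):
    # candidates: unsatisfied users of the highest-priority class present
    unsat = [u for u in users if len(allocation[u]) < demand[u]]
    if not unsat:
        break
    top = min(priority[u] for u in unsat)
    candidates = [u for u in unsat if priority[u] == top]
    best = max(candidates, key=lambda u: gain[u][sc])  # exploit channel diversity
    allocation[best].append(sc)

print({u: len(s) for u, s in allocation.items()})
```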

An Efficient Approach to Mining Frequent Itemsets on Data Streams

The increasing importance of data streams in a wide range of advanced applications has led to extensive study of frequent pattern mining. Mining data streams poses many new challenges, among them the one-scan requirement, unbounded memory demands, and the high arrival rate of the data. In this paper, we propose a new approach, SFIDS, for mining itemsets on data streams, developed on the basis of the FIDS algorithm. Our aim was to keep the advantages of the previous approach while resolving some of its drawbacks, and consequently to improve run time and memory consumption. SFIDS has the following advantages: it uses a lattice-like data structure for keeping frequent itemsets, and it separates regions from each other by deleting common nodes, which decreases the search space, memory consumption, and run time. Finally, under CPU constraints, when an increasing data arrival rate overloads the system, SFIDS automatically detects this situation and discards some of the unprocessed data; using a probabilistic technique, we guarantee that the resulting error is bounded by a user-specified threshold. Final results show that the SFIDS algorithm attains a run time improvement of about 50% over the FIDS approach.
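
SFIDS itself works on itemsets over a lattice structure; as an assumed stand-in for the bounded-error idea, the classic Lossy Counting algorithm of Manku and Motwani shows how counts over a stream can be maintained in one scan with error at most a user-specified epsilon.

```python
# Lossy Counting (Manku & Motwani): one-scan frequency counts over a stream
# with undercount bounded by epsilon * n. Shown for single items; SFIDS
# works on itemsets and uses a different (lattice) structure.
def lossy_count(stream, epsilon):
    bucket_width = int(1 / epsilon)
    counts, n = {}, 0                 # item -> (count, max undercount delta)
    for item in stream:
        n += 1
        if item in counts:
            c, d = counts[item]
            counts[item] = (c + 1, d)
        else:
            counts[item] = (1, (n - 1) // bucket_width)
        if n % bucket_width == 0:     # bucket boundary: prune rare items
            bucket = n // bucket_width
            counts = {k: (c, d) for k, (c, d) in counts.items() if c + d > bucket}
    return counts, n

stream = ["a"] * 60 + ["b"] * 25 + ["c"] * 5 + ["a", "b"] * 5
counts, n = lossy_count(stream, epsilon=0.05)
# Each retained count underestimates the true count by at most epsilon * n.
print({k: c for k, (c, d) in counts.items()})
```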

The Use of Dynamically Optimised High Frequency Moving Average Strategies for Intraday Trading

This paper is motivated by uncertainty in financial decision making and by the way artificial intelligence and soft computing, with their uncertainty-reducing properties, can be used in algorithmic trading applications that trade at high frequency. The paper presents an optimized high-frequency trading system combined with various moving averages to produce a hybrid system that outperforms trading systems relying solely on moving averages. It optimizes an adaptive neuro-fuzzy inference system that takes both the price and its moving average as inputs, learns to predict price movements from training data consisting of intraday data, dynamically switches between the best-performing moving averages, and decides when to buy or sell a given currency at high frequency.
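
The ANFIS component is beyond a short sketch, but the dynamic-switching idea can be shown on its own: at each tick, trade the signal of whichever moving average window performed best over a recent lookback. The data and parameters below are synthetic assumptions, not the paper's system.

```python
import numpy as np

rng = np.random.default_rng(3)
price = 100 + np.cumsum(rng.normal(0, 0.05, 5000))   # synthetic intraday ticks

def causal_sma(x, w):
    """Trailing w-tick moving average; NaN until enough history exists."""
    c = np.cumsum(np.insert(x, 0, 0.0))
    ma = (c[w:] - c[:-w]) / w
    return np.concatenate([np.full(w - 1, np.nan), ma])

windows = [10, 30, 60]
signals = {w: np.where(price > causal_sma(price, w), 1, -1) for w in windows}
returns = np.diff(price)

# Dynamic switching: every tick, trade the signal of whichever window
# performed best over the previous `lookback` ticks.
lookback = 500
pnl = 0.0
for t in range(lookback + max(windows), len(price) - 1):
    recent = {w: np.sum(signals[w][t - lookback:t] * returns[t - lookback:t])
              for w in windows}
    best = max(recent, key=recent.get)
    pnl += signals[best][t] * returns[t]
print(f"switched-MA strategy P&L on synthetic data: {pnl:.2f}")
```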

Mining Image Features in an Automatic Two-Dimensional Shape Recognition System

The number of features required to represent an image can be huge, and using all available features to recognize objects suffers from the curse of dimensionality. Feature selection and extraction is the pre-processing step of image mining; the main issues in analyzing images are the effective identification of features and their extraction. The mining problem addressed here is the grouping of features for different shapes. Experiments were conducted using shape outlines as the features: the outline readings are normalized and put through a dimensionality reduction process using an eigenvector-based method to produce a new set of readings, after which the data are grouped by shape. Through statistical analysis of these readings, together with peak measures, a robust classification and recognition process is achieved. Tests showed that the suggested methods are able to recognize objects automatically by their shapes, and experiments also demonstrate the system's invariance to rotation, translation, scale, reflection, and a small degree of distortion.
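
A sketch of that pre-processing pipeline using a hypothetical outline signature (centroid-to-boundary distances sampled at equal angles), scale normalization, and eigenvector-based reduction via PCA. None of the specific choices below are taken from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)

def outline_signature(n_points=64, n_lobes=3, noise=0.02):
    """Centroid-to-boundary distance at equal angles: a hypothetical
    stand-in for the paper's shape outline readings."""
    theta = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    r = 1.0 + 0.3 * np.cos(n_lobes * theta) + rng.normal(0, noise, n_points)
    return r / r.max()            # scale normalisation

# 40 noisy outlines of two shape classes (3-lobed vs 5-lobed)
X = np.array([outline_signature(n_lobes=3) for _ in range(20)] +
             [outline_signature(n_lobes=5) for _ in range(20)])

# Eigenvector-based dimensionality reduction: keep the top components
pca = PCA(n_components=5).fit(X)
Z = pca.transform(X)
print("explained variance:", pca.explained_variance_ratio_.round(3))
print("reduced feature shape:", Z.shape)   # 64 readings -> 5 components
```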

A Design and Implementation Model for Web Caching Using Server "URL Rewriting"

To make surfing the Internet faster, and to save the redundant processing incurred by each request for the same web page, many caching techniques have been developed to reduce the latency of retrieving data on the World Wide Web. In this paper we give a quick overview of existing web caching techniques for dynamic web pages, and then introduce a design and implementation model that takes advantage of the "URL rewriting" feature of some popular web servers, e.g. Apache, to provide an effective approach to caching dynamic web pages.
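
As an assumed illustration (not the paper's implementation), Apache's mod_rewrite can test for a cached static copy of a dynamic page and serve it when one exists, falling through to the generating script otherwise. The paths and script name are hypothetical.

```apache
# Hypothetical mod_rewrite rules: serve /cache/<id>.html if it exists on
# disk; otherwise pass the request to the dynamic page generator.
RewriteEngine On
RewriteCond %{DOCUMENT_ROOT}/cache/$1.html -f
RewriteRule ^article/([0-9]+)$ /cache/$1.html [L]
RewriteRule ^article/([0-9]+)$ /render.php?id=$1 [L]
```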

Equilibrium Modeling of Cu and Ni Removal from Aqueous Solutions: Influence of Salinity

This study evaluates the influence of salinity (NaCl) on the equilibrium of Cu and Ni removal from aqueous solutions by a natural sorbent, zeolite. Equilibrium data were obtained by batch experiments. The salinity of the aqueous solution was adjusted by dissolving NaCl in distilled water and was studied over the range of NaCl concentrations from 1 g/l to 100 g/l. For Cu sorption the influence of salinity is significant: the maximum capacity of the zeolite for Cu decreases with growing NaCl concentration. For Ni sorption the influence of salinity is less pronounced than for Cu, and the maximum capacity of the zeolite for Ni decreases only slightly with growing NaCl concentration.
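
The reported maximum capacities suggest an isotherm fit; the Langmuir isotherm is a common choice for such batch equilibrium data and is assumed here. The data points below are invented, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, b):
    """Langmuir isotherm: q = q_max * b * c / (1 + b * c)."""
    return q_max * b * c_eq / (1.0 + b * c_eq)

# Hypothetical batch equilibrium data: equilibrium concentration (mg/l)
# and sorbed amount (mg/g).
c_eq = np.array([5, 10, 25, 50, 100, 200], dtype=float)
q = np.array([2.1, 3.6, 5.9, 7.4, 8.3, 8.8])

(q_max, b), _ = curve_fit(langmuir, c_eq, q, p0=[10.0, 0.05])
print(f"fitted maximum capacity q_max = {q_max:.2f} mg/g, affinity b = {b:.4f} l/mg")
# Repeating the fit at each NaCl level would trace how q_max falls with salinity.
```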