Analysis of Noise Level Effects on Signal-Averaged Electrocardiograms

Noise level has critical effects on the diagnostic performance of the signal-averaged electrocardiogram (SAECG), because the true onset and offset of the QRS complex can be masked by residual noise and are therefore sensitive to the noise level. Several studies and commercial machines have used a fixed number of heart beats (typically between 200 and 600 beats) or a predefined noise level (typically between 0.3 and 1.0 μV) in each of the X, Y and Z leads to perform SAECG analysis. However, the different criteria and methods used to perform SAECG cause discrepancies in the noise levels among study subjects. According to the recommendations of the 1991 ESC, AHA and ACC Task Force Consensus Document for the use of SAECG, the determination of QRS onset and offset is closely related to the mean and standard deviation of the noise sample. Hence, this study performed SAECG using consistent root-mean-square (RMS) noise levels among study subjects and analyzed the effects of noise level on SAECG. It also evaluated the differences in time-domain SAECG parameters between normal subjects and chronic renal failure (CRF) patients. The study subjects comprised 50 normal Taiwanese subjects and 20 CRF patients. During signal-averaged processing, different RMS noise levels were applied to evaluate their effects on three time-domain parameters: (1) the filtered total QRS duration (fQRSD), (2) the RMS voltage of the last 40 ms of the QRS (RMS40), and (3) the duration of low-amplitude signals below 40 μV (LAS40). The results demonstrated that reducing the RMS noise level increases fQRSD and LAS40, decreases RMS40, and further increases the differences in fQRSD and RMS40 between normal subjects and CRF patients. The SAECG may also become abnormal as the RMS noise level is reduced. In conclusion, it is essential to establish diagnostic criteria for SAECG using consistent RMS noise levels to reduce the effects of noise level.
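
As an illustration of how these time-domain parameters are computed (a minimal sketch, assuming a filtered vector-magnitude signal in μV sampled at 1 kHz, with hypothetical QRS onset/offset indices; the actual onset/offset detection against the noise mean and standard deviation is not reproduced here):

```python
import numpy as np

FS = 1000  # assumed sampling rate in Hz

def rms_noise(segment):
    """RMS noise level of an isoelectric segment of the averaged beat."""
    return np.sqrt(np.mean((segment - segment.mean()) ** 2))

def saecg_parameters(vm, onset, offset):
    """Illustrative fQRSD, RMS40 and LAS40 from a filtered vector-magnitude
    signal vm (in microvolts); onset/offset are sample indices, whose
    detection is exactly what residual noise perturbs."""
    fqrsd = (offset - onset) * 1000.0 / FS            # QRS duration, ms
    last40 = vm[offset - 40 * FS // 1000 : offset]    # terminal 40 ms window
    rms40 = np.sqrt(np.mean(last40 ** 2))             # RMS voltage, uV
    below = vm[onset:offset] < 40.0                   # samples under 40 uV
    las_samples = 0
    for flag in below[::-1]:                          # walk back from offset
        if not flag:
            break
        las_samples += 1
    las40 = las_samples * 1000.0 / FS                 # LAS40 duration, ms
    return fqrsd, rms40, las40

rng = np.random.default_rng(0)
t = np.arange(600)
vm = np.abs(rng.normal(0, 1, 600)) + 100 * np.exp(-((t - 300) / 40.0) ** 2)
print(saecg_parameters(vm, 250, 360), rms_noise(vm[500:]))
```

Lowering the accepted RMS noise level moves the detected onset earlier and the offset later, which is why fQRSD and LAS40 grow while RMS40 shrinks.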

Landfill Gas Monitoring at Borehole Wells using an Autonomous Environmental Monitoring System

An autonomous environmental monitoring system (Smart Landfill) has been constructed for the quantitative measurement of the components of landfill gas found at borehole wells at the perimeter of landfill sites. The main components of landfill gas are the greenhouse gases methane and carbon dioxide, which have been monitored in the range 0-5% by volume. This monitoring system has not only been tested in the laboratory but has also been deployed in multiple field trials, and the collected data compared successfully with those from on-site monitors. This success shows the potential of this system for application in environments where reliable gas monitoring is crucial.

A Methodology for Quality Problems Diagnosis in SMEs

This article proposes a new methodology to be used by SMEs (Small and Medium Enterprises) to characterize their quality performance, highlighting weaknesses and areas for improvement. The methodology aims to identify the principal causes of quality problems and to help prioritize improvement initiatives. It is a self-assessment methodology intended to be easy to implement by companies with a low maturity level in quality. The methodology is organized into six steps, which include gathering information about predetermined processes and subprocesses of quality management, defined on the basis of the well-known Juran trilogy for quality management (quality planning, quality control and quality improvement), and about predetermined result categories, defined on the basis of the quality concept. A set of tools for data collection and analysis is used, such as interviews, flowcharts, process analysis diagrams and Failure Mode and Effects Analysis (FMEA). The article also presents the conclusions obtained from the application of the methodology in two case studies.
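
Where FMEA is used in the analysis step, failure modes are conventionally prioritized by the risk priority number (RPN = severity × occurrence × detection). A minimal sketch, with made-up failure modes:

```python
# Rank failure modes by risk priority number, RPN = S x O x D,
# the conventional FMEA prioritization; the entries are illustrative.
failure_modes = [
    # (description, severity, occurrence, detection), each rated 1-10
    ("wrong raw material batch released", 8, 3, 4),
    ("inspection step skipped under time pressure", 6, 5, 7),
    ("measuring device out of calibration", 7, 2, 5),
]

for desc, s, o, d in sorted(failure_modes,
                            key=lambda m: m[1] * m[2] * m[3], reverse=True):
    print(f"RPN={s * o * d:4d}  {desc}")
```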

Multiclass Support Vector Machines for Environmental Sounds Classification Using log-Gabor Filters

In this paper we propose a robust environmental sound classification approach based on spectrogram features derived from log-Gabor filters. This approach includes two methods. In the first method, the spectrograms are passed through an appropriate log-Gabor filter bank, and the outputs are averaged and subjected to an optimal feature selection procedure based on a mutual information criterion. The second method applies the same steps to only three patches extracted from each spectrogram. To investigate the accuracy of the proposed methods, we conduct experiments using a large database containing 10 environmental sound classes. The classification results based on multiclass Support Vector Machines show that the second method is the most efficient, with an average classification accuracy of 89.62%.
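
A minimal sketch of the first method's pipeline (radial log-Gabor filtering of the spectrogram in the frequency domain, averaging of the filter outputs, mutual-information feature selection, and a multiclass SVM); the filter centers, bandwidth ratio, feature count and random placeholder data are assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def log_gabor_bank(shape, centers=(0.05, 0.1, 0.2, 0.4), sigma_ratio=0.65):
    """Radial log-Gabor transfer functions
    G(f) = exp(-ln(f/f0)^2 / (2 ln(sigma_ratio)^2)) on the FFT grid;
    the DC component is zeroed since log-Gabor filters have no DC."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    radius = np.hypot(fy, fx)
    radius[0, 0] = 1.0                       # placeholder; DC zeroed below
    bank = []
    for f0 in centers:
        g = np.exp(-np.log(radius / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
        g[0, 0] = 0.0
        bank.append(g)
    return bank

def features(spectrogram, bank):
    """Filter in the frequency domain and average the magnitude outputs."""
    spec = np.fft.fft2(spectrogram)
    outputs = [np.abs(np.fft.ifft2(spec * g)) for g in bank]
    return np.mean(outputs, axis=0).ravel()

rng = np.random.default_rng(0)
spectrograms = rng.random((40, 64, 64))      # placeholders for spectrograms
labels = rng.integers(0, 10, size=40)        # 10 sound classes
bank = log_gabor_bank(spectrograms[0].shape)
X = np.array([features(s, bank) for s in spectrograms])
clf = make_pipeline(SelectKBest(mutual_info_classif, k=200), SVC())
clf.fit(X, labels)
print(clf.score(X, labels))
```

scikit-learn's SVC handles the multiclass case via one-vs-one voting, which is one common realization of a multiclass SVM.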

Web-GIS based Outdoor Education Program for Junior High Schools

This study, focusing on the importance of encouraging outdoor activities for children, aims to propose and implement a Web-GIS based outdoor education program for junior high schools, to be evaluated by its users. Specifically, to improve outdoor activities in junior high school education, an outdoor education program chiefly using Web-GIS, which provides an effective tool for providing and sharing information, is proposed and implemented before being evaluated by users. The conclusions of this study can be summarized in the following two points. (1) A five-step outdoor education program based on Web-GIS was proposed for a “second school” at junior high schools, and was then implemented before being evaluated by teachers as users. (2) Based on the results of the evaluation by teachers, it became clear that general operation of the Web-GIS based outdoor education program by teachers alone is difficult due to their lack of knowledge regarding Web-GIS, and that support staff who can effectively utilize Web-GIS are essential.

Direct Block Backward Differentiation Formulas for Solving Second Order Ordinary Differential Equations

In this paper, a direct method based on a variable step size Block Backward Differentiation Formula, referred to as BBDF2, is developed for solving second order Ordinary Differential Equations (ODEs). The advantages of the BBDF2 method over the corresponding sequential variable step variable order Backward Differentiation Formula (BDFVS), when used to solve the same problem as a first order system, are pointed out. Numerical results are given to validate the method.
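
For orientation, a minimal sketch of the sequential baseline: the fixed-step BDF2 formula applied to the second order problem rewritten as a first order system (the paper's block method instead computes several solution points simultaneously with variable step size, which is not reproduced here):

```python
import numpy as np
from scipy.optimize import fsolve

def bdf2(f, t0, y0, h, n):
    """Fixed-step BDF2 for y' = f(t, y):
    y_{k+1} = (4/3) y_k - (1/3) y_{k-1} + (2/3) h f(t_{k+1}, y_{k+1})."""
    ys = [np.asarray(y0, float)]
    # one backward-Euler step supplies the second starting value
    ys.append(fsolve(lambda y: y - ys[0] - h * f(t0 + h, y), ys[0]))
    for k in range(1, n):
        t = t0 + (k + 1) * h
        g = lambda y: y - (4 * ys[-1] - ys[-2]) / 3 - (2 * h / 3) * f(t, y)
        ys.append(fsolve(g, ys[-1]))           # implicit step via Newton-type solve
    return np.array(ys)

# y'' = -y rewritten as the first order system u' = (u2, -u1)
f = lambda t, u: np.array([u[1], -u[0]])
sol = bdf2(f, 0.0, [1.0, 0.0], 0.01, 500)
print(sol[-1, 0], np.cos(5.0))                 # numerical vs exact solution
```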

Effect of Temperature and Time on Dilute Acid Pretreatment of Corn Cobs

Lignocellulosic materials are a newly targeted source for producing second generation biofuels such as biobutanol. However, this process is significantly hindered by the native structure of the biomass. Therefore, a pretreatment process is essential to remove hemicelluloses and lignin prior to enzymatic hydrolysis. The goals of pretreatment are to remove hemicelluloses and lignin, increase biomass porosity, and increase enzyme accessibility. The main goal of this research is to study the important variables, namely pretreatment temperature and time, that give the highest total sugar yield in the pretreatment step using dilute phosphoric acid. After pretreatment, the highest total sugar yield of 13.61 g/L was obtained under the optimal condition of 140°C for 10 min of pretreatment time, using 1.75% (w/w) H3PO4 at a liquid-to-solid ratio of 15:1. The total sugar yield of the two-stage process (pretreatment + enzymatic hydrolysis) was 27.38 g/L.

Using Suffix Tree Document Representation in Hierarchical Agglomerative Clustering

In text categorization, the most widely used method for document representation is based on word frequency vectors, called the Vector Space Model (VSM). This representation relies only on the words in documents and therefore loses any “word context” information found in the document. In this article we compare the classical method of document representation with a method called the Suffix Tree Document Model (STDM), which represents documents in suffix tree format. For the STDM model we propose a new approach to document representation and a new formula for computing the similarity between two documents: the suffix tree is built only for two documents at a time, as sketched below. This approach is faster, has lower memory consumption, and uses the entire document representation without requiring methods for disposing of nodes. The proposed similarity formula substantially improves clustering quality. This representation method was validated using Hierarchical Agglomerative Clustering (HAC). In this context we also examine the influence of stemming in the document preprocessing step and highlight the difference between similarity and dissimilarity measures in finding “closer” documents.
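
A minimal sketch of the pairwise idea, viewing the suffix tree of two documents through the set of word-level phrases it encodes (the phrase-length cap, the Jaccard overlap used here, and the toy documents are illustrative simplifications, not the paper's exact structure or similarity formula):

```python
def phrases(words, max_len=4):
    """All contiguous word sequences up to max_len -- exactly the paths
    of a (truncated) suffix tree built for the document."""
    return {tuple(words[i:i + n])
            for n in range(1, max_len + 1)
            for i in range(len(words) - n + 1)}

def stdm_similarity(doc_a, doc_b):
    """Overlap of shared phrases between the two documents only; building
    the structure per pair keeps memory proportional to two documents."""
    pa = phrases(doc_a.lower().split())
    pb = phrases(doc_b.lower().split())
    return len(pa & pb) / len(pa | pb)       # Jaccard over shared phrases

print(stdm_similarity("the cat sat on the mat", "the cat lay on the mat"))
```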

A Serializability Condition for Multi-step Transactions Accessing Ordered Data

In mobile environments, unspecified numbers of transactions arrive in continuous streams. To prove the correctness of their concurrent execution, a method of modelling an infinite number of transactions is needed. Standard database techniques model fixed finite schedules of transactions. Recently, techniques based on temporal logic have been proposed as suitable for modelling infinite schedules. The drawback of these techniques is that proving the basic serializability correctness condition is impractical, as encoding (the absence of) conflict cyclicity within large sets of transactions results in prohibitively large temporal logic formulae. In this paper, we show that, under certain common assumptions on the graph structure of the data items accessed by the transactions, conflict cyclicity need only be checked within all possible pairs of transactions. This results in formulae of considerably reduced size in any temporal-logic-based approach to proving serializability, and scales to arbitrary numbers of transactions.
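
The pairwise check itself is the classical conflict-graph cyclicity test; for a finite schedule it reduces to the sketch below (transaction IDs, operations and items are illustrative), and the paper's contribution is showing when running this test over all pairs suffices for unbounded transaction streams:

```python
from collections import defaultdict

def conflict_serializable(schedule):
    """schedule: list of (transaction_id, op, item), op in {'r', 'w'}.
    Builds the conflict graph and reports acyclicity (= serializability)."""
    edges = defaultdict(set)
    for i, (ti, oi, xi) in enumerate(schedule):
        for tj, oj, xj in schedule[i + 1:]:
            if ti != tj and xi == xj and 'w' in (oi, oj):
                edges[ti].add(tj)            # conflict: earlier op -> later op
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)
    def cyclic(u):                            # DFS cycle detection
        color[u] = GRAY
        for v in edges[u]:
            if color[v] == GRAY or (color[v] == WHITE and cyclic(v)):
                return True
        color[u] = BLACK
        return False
    return not any(color[t] == WHITE and cyclic(t) for t in list(edges))

s = [(1, 'r', 'x'), (2, 'w', 'x'), (2, 'r', 'y'), (1, 'w', 'y')]
print(conflict_serializable(s))   # False: T1 -> T2 on x, T2 -> T1 on y
```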

Shape Restoration of the Left Ventricle

This paper describes an automatic algorithm to restore the shape of three-dimensional (3D) left ventricle (LV) models created from magnetic resonance imaging (MRI) data using a geometry-driven optimization approach. Our basic premise is to restore the LV shape such that the LV epicardial surface is smooth after the restoration. A geometrical measure known as the minimum principal curvature (κ2) is used to assess the smoothness of the LV. This measure is used to construct the objective function of a two-step optimization process. The objective of the optimization is to achieve a smooth epicardial shape by iterative in-plane translation of the MRI slices. Quantitatively, this yields a minimum sum of the magnitudes of κ2 over the points where κ2 is negative. A limited-memory quasi-Newton algorithm, L-BFGS-B, is used to solve the optimization problem. We tested our algorithm on an in vitro theoretical LV model and 10 in vivo patient-specific models which contain significant motion artifacts. The results show that our method is able to automatically restore the shape of LV models back to smoothness without altering the general shape of the model. The magnitudes of the in-plane translations are also consistent with existing registration techniques and experimental findings.
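
A toy sketch of the optimization setup, with a simple second-difference roughness term on made-up slice centers standing in for the κ2-based objective (the curvature computation on the actual epicardial mesh is not reproduced):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_slices = 10
centers = np.cumsum(rng.normal(0, 0.5, (n_slices, 2)), axis=0)  # jittery stack

def roughness(shifts):
    """Sum of squared second differences of the shifted slice centers --
    a toy stand-in for integrating |kappa_2| over the epicardial surface."""
    c = centers + shifts.reshape(n_slices, 2)
    d2 = c[2:] - 2 * c[1:-1] + c[:-2]
    return np.sum(d2 ** 2)

# optimize the in-plane (x, y) translation of every slice with L-BFGS-B,
# bounding the shifts so the overall shape is not altered
res = minimize(roughness, np.zeros(2 * n_slices), method="L-BFGS-B",
               bounds=[(-5.0, 5.0)] * (2 * n_slices))
print(res.fun, res.x.reshape(n_slices, 2)[:3])
```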

Satellite Data Classification Accuracy Assessment Based on Reference Datasets

In order to develop forest management strategies for the tropical forests of Malaysia, surveying forest resources and monitoring forest areas affected by logging activities are essential. Tremendous effort has gone into land cover classification related to forest resource management in this country, as it is a priority in all aspects of forest mapping using remote sensing and related technologies such as GIS. In fact, classification is a compulsory step in any remote sensing research. The main objective of this paper is therefore to assess the classification accuracy of a forest map classified from Landsat TM data using different numbers of reference data points (200 and 388). The comparison was made through an observation approach (200 reference points) and combined interpretation and observation approaches (388 reference points). Five land cover classes, namely primary forest, logged-over forest, water bodies, bare land and agricultural crop/mixed horticulture, could be identified by their differences in spectral response. Results showed that the overall accuracy with 200 reference points was 83.5% (kappa value 0.7502459; kappa variance 0.002871), which is considered acceptable or good for optical data. When the number of reference points in the confusion matrix was increased from 200 to 388, the overall accuracy improved from 83.5% to 89.17%, and the kappa statistic increased from 0.7502459 to 0.8026135. These accuracies suggest that the strategy for selecting training areas, the interpretation approach and the number of reference points used are important for obtaining better classification results.
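
The reported figures follow from a standard confusion-matrix computation; a minimal sketch of the overall accuracy and kappa statistic (the 5 × 5 counts below are made up for the five classes, not the study's data):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference, columns = classified)."""
    cm = np.asarray(cm, float)
    n = cm.sum()
    po = np.trace(cm) / n                          # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2    # chance agreement
    return po, (po - pe) / (1 - pe)

cm = np.array([[50, 3, 0, 1, 2],                   # illustrative counts
               [4, 45, 1, 0, 3],
               [0, 1, 30, 2, 0],
               [1, 0, 2, 28, 1],
               [2, 3, 0, 1, 40]])
acc, kappa = accuracy_and_kappa(cm)
print(f"overall accuracy = {acc:.4f}, kappa = {kappa:.4f}")
```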

Comparison of Finite Difference Schemes for Water Flow in Unsaturated Soils

Flow movement in unsaturated soil can be expressed by a partial differential equation known as the Richards equation. The objective of this study is to find an appropriate implicit numerical solution for the head-based Richards equation. Some of the well-known finite difference schemes (fully implicit, Crank-Nicolson and Runge-Kutta) are utilized in this study. In addition, the effects of different approximations of the moisture capacity function, convergence criteria and time stepping methods are evaluated. Two different infiltration problems were solved to investigate the performance of the different schemes; these problems involve vertical water flow in wet and very dry soils. The numerical solutions of the two problems were compared using four evaluation criteria, and the comparisons showed that the fully implicit scheme performs better than the other schemes. In addition, using the standard chord-slope method to approximate the moisture capacity function, an automatic time stepping method, and the difference between two successive iterations as the convergence criterion in the fully implicit scheme leads to better and more reliable results for simulating fluid movement in different unsaturated soils.
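
For reference, the head-based form of the Richards equation and the chord-slope approximation of the moisture capacity evaluated above can be written as:

```latex
C(h)\,\frac{\partial h}{\partial t}
  = \frac{\partial}{\partial z}\!\left[K(h)\left(\frac{\partial h}{\partial z}+1\right)\right],
\qquad C(h)=\frac{d\theta}{dh},
\qquad
C \approx \frac{\theta^{m+1}-\theta^{m}}{h^{m+1}-h^{m}},
```

where h is the pressure head, θ the volumetric moisture content, K(h) the unsaturated hydraulic conductivity, z the vertical coordinate, and m the time level between which the chord slope is taken.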

Sovereign Credit Risk Measures

This paper focuses on sovereign credit risk, a topical issue related to the current Eurozone crisis. In the light of the recent financial crisis, market perception of the creditworthiness of individual sovereigns has changed significantly. Before the outbreak of the crisis, market participants did not differentiate between the credit risk borne by individual states, despite different levels of public indebtedness. As the financial crisis proceeded, market participants became aware of the worsening fiscal situation in European countries and started to discriminate among government issuers. Concerns about increasing sovereign risk were reflected in a surging sovereign risk premium. The main aim of this paper is to shed light on the characteristics of sovereign risk, with special attention paid to the mutual relation between the credit spread and the CDS premium as the main measures of the sovereign risk premium.
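
In the usual notation, the two measures are linked through the CDS-bond basis:

```latex
\text{credit spread} = y_{\text{bond}} - y_{\text{risk-free}},
\qquad
\text{CDS basis} = s_{\text{CDS}} - \left(y_{\text{bond}} - y_{\text{risk-free}}\right),
```

so a basis near zero indicates that the bond and CDS markets price the sovereign's default risk consistently, while a persistent non-zero basis signals frictions between the two measures.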

A Forward Automatic Censored Cell-Averaging Detector for Multiple Target Situations in Log-Normal Clutter

A challenging problem in radar signal processing is to achieve reliable target detection in the presence of interference. In this paper, we propose a novel algorithm for automatic censoring of radar interfering targets in log-normal clutter. The proposed algorithm, termed the forward automatic censored cell averaging detector (F-ACCAD), consists of two steps: removing the corrupted reference cells (censoring) and the actual detection. Both steps are performed dynamically by using a suitable set of ranked cells to estimate the unknown background level and set the adaptive thresholds accordingly. The F-ACCAD algorithm requires neither prior information about the clutter parameters nor knowledge of the number of interfering targets. The effectiveness of the F-ACCAD algorithm is assessed by computing, using Monte Carlo simulations, the probability of censoring and the probability of detection in different background environments.
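
A simplified sketch of the censor-then-detect idea (ranked cells admitted one by one while they remain consistent with the noise estimate; the threshold factors below are illustrative constants, not the F-ACCAD scale factors designed for log-normal clutter):

```python
import numpy as np

def censored_ca_detect(cut, reference, k0=8, t_censor=2.0, t_detect=4.0):
    """cut: cell under test; reference: surrounding reference cells.
    Step 1 (censoring): start from the k0 smallest ranked cells and keep
    admitting the next ranked cell while it stays below an adaptive
    threshold, so cells corrupted by interfering targets are discarded.
    Step 2 (detection): compare the CUT against a threshold built from
    the censored noise-level estimate."""
    ranked = np.sort(np.asarray(reference, float))
    kept = list(ranked[:k0])
    for cell in ranked[k0:]:
        if cell < t_censor * np.mean(kept):   # consistent with noise: keep
            kept.append(cell)
        else:
            break                             # first outlier: censor the rest
    noise = np.mean(kept)
    return cut > t_detect * noise, noise

rng = np.random.default_rng(0)
ref = rng.lognormal(0.0, 0.5, 24)
ref[[3, 7]] += 30.0                           # two interfering targets
print(censored_ca_detect(40.0, ref))
```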

An Idea About How to Teach OO-Programming to Students

Object-oriented programming is a wonderful way to make programming of huge real-life tasks much easier than with procedural languages. In order to teach these ideas to students, it is important to find a good task that shows the advantages of OO-programming very naturally. This paper gives an example, the game Battleship, which seems to work excellently for teaching the OO ideas (using Java, [1], [2], [3], [4]). A three-step task is presented for teaching OO-programming using just one example suitable for conveying many of the OO ideas. Observations and conclusions about how the whole teaching course worked out are given at the end.
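
A minimal flavor of why the game maps so naturally onto OO concepts, shown in Python for brevity although the course itself uses Java: each ship is an object encapsulating its own damage state, and the board delegates shots to whichever ship occupies the targeted cell.

```python
class Ship:
    """Each ship encapsulates its own cells and damage state."""
    def __init__(self, name, cells):
        self.name, self.cells, self.hits = name, set(cells), set()

    def hit(self, cell):
        self.hits.add(cell)

    def is_sunk(self):
        return self.hits == self.cells

class Board:
    """The board only knows which ship occupies which cell and delegates."""
    def __init__(self, ships):
        self.occupied = {c: s for s in ships for c in s.cells}

    def shoot(self, cell):
        ship = self.occupied.get(cell)
        if ship is None:
            return "miss"
        ship.hit(cell)
        return f"sunk {ship.name}" if ship.is_sunk() else "hit"

board = Board([Ship("destroyer", [(0, 0), (0, 1)])])
print(board.shoot((0, 0)), board.shoot((0, 1)), board.shoot((5, 5)))
```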

Peakwise Smoothing of Data Models using Wavelets

Smoothing or filtering of data is the first preprocessing step for noise suppression in many applications involving data analysis. The moving average is the most popular method of smoothing data; its generalization led to the development of the Savitzky-Golay filter. Many window smoothing methods have been developed by convolving the data with different window functions for different applications; the most widely used window functions are the Gaussian and Kaiser windows. Function approximation of the data by polynomial regression, Fourier expansion or wavelet expansion also yields smoothed data. Wavelets can also smooth the data to a great extent by thresholding the wavelet coefficients. However, almost all smoothing methods destroy peaks and flatten them as the support of the window is increased. In certain applications it is desirable to retain peaks while smoothing the data as much as possible. In this paper we present a methodology, called peak-wise smoothing, that smooths the data to any desired level without losing the major peak features.
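
For contrast, the standard wavelet-thresholding smoothing that the peak-wise method builds on can be sketched as follows (PyWavelets, soft thresholding with the universal threshold; wavelet choice and level are illustrative):

```python
import numpy as np
import pywt

def wavelet_smooth(x, wavelet="db4", level=4):
    """Classical soft-threshold wavelet smoothing with the universal
    threshold; the noise scale is estimated from the finest detail level."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

t = np.linspace(0, 1, 1024)
signal = np.exp(-((t - 0.5) / 0.02) ** 2)            # a sharp peak
noisy = signal + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(np.max(np.abs(wavelet_smooth(noisy) - signal)))
```

Large coefficients, which carry the peaks, survive the threshold; the peak-wise method pushes this property further so that peak features are preserved at any smoothing level.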

Morphometric Analysis of Tor tambroides by Stepwise Discriminant and Neural Network Analysis

The population structure of Tor tambroides was investigated with morphometric data (i.e. morphometric measurements and truss measurements). A morphometric analysis was conducted to compare specimens from three waterfalls: Sunanta, Nan Chong Fa and Wang Muang waterfalls at Khao Nan National Park, Nakhon Si Thammarat, Southern Thailand. The results of stepwise discriminant analysis on seven morphometric variables and 21 truss variables per individual were the same as those from a neural network. Fish from the three waterfalls were separated into three groups based on their morphometric measurements. The morphometric data show that the neural network model performed better than the stepwise discriminant analysis.
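
A minimal sketch of such a comparison (scikit-learn's sequential feature selection wrapped around linear discriminant analysis standing in for stepwise discriminant analysis, and a small MLP for the neural network; the data below are random placeholders with artificial group separation, not the fish measurements):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 28))             # 7 morphometric + 21 truss variables
y = np.repeat([0, 1, 2], 30)              # three waterfall populations
X[y == 1] += 0.8                          # artificial group separation
X[y == 2] -= 0.8

lda = make_pipeline(StandardScaler(),
                    SequentialFeatureSelector(LinearDiscriminantAnalysis(),
                                              n_features_to_select=10),
                    LinearDiscriminantAnalysis())
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0))
for name, model in [("stepwise-LDA", lda), ("neural net", mlp)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```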

Particle Simulation of Rarefied Gas Flows with a Superimposed Wall Surface Temperature Gradient in Microgeometries

Rarefied gas flows often occur in micro-electro-mechanical systems, and classical CFD cannot precisely predict the flow and thermal behavior due to the high Knudsen number. Therefore, the heat transfer and fluid dynamics characteristics of rarefied gas flows in both a two-dimensional simple microchannel and a geometry similar to a single Knudsen compressor have been investigated using a particle simulation method, with the goal of increasing the performance of an actual Knudsen compressor. Thermal transpiration and thermal creep, rarefied gas dynamic phenomena that drive flow from lower to higher temperature, are generated by applying two different longitudinal temperature gradients (linear and step) along the walls of the flow microchannel. In this study the influence of the magnitude of the temperature gradient and the governing pressure at various Knudsen numbers and length-to-height ratios has been examined.
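
The regime referenced above is set by the Knudsen number, the ratio of the molecular mean free path λ to the characteristic length L_c (here the channel height):

```latex
\mathrm{Kn} = \frac{\lambda}{L_c}
```

Continuum CFD is generally reliable for Kn below about 0.01, slip-flow corrections apply for roughly 0.01-0.1, and the transitional regime 0.1-10 calls for particle methods such as DSMC, which motivates the particle simulation approach taken here.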

Low Resolution Face Recognition Using Mixture of Experts

Human activity is a major concern in a wide variety of applications, such as video surveillance, human-computer interfaces and face image database management. Detecting and recognizing faces is a crucial step in these applications. Furthermore, major advancements and initiatives in security applications in recent years have propelled face recognition technology into the spotlight. The performance of existing face recognition systems declines significantly if the resolution of the face image falls below a certain level. This is especially critical in surveillance imagery where, for many reasons, often only low-resolution video of faces is available. If these low-resolution images are passed to a face recognition system, the performance is usually unacceptable. Hence, resolution plays a key role in face recognition systems. In this paper we introduce a new low resolution face recognition system based on mixture of experts neural networks. To produce the low resolution input images, we down-sampled the 48 × 48 ORL images to 12 × 12 using nearest-neighbor interpolation; applying bicubic interpolation then yields enhanced images, which are fed to a Principal Component Analysis (PCA) feature extractor. Comparison with some of the most closely related methods indicates that the proposed model yields an excellent recognition rate in low resolution face recognition, namely 100% on the training set and 96.5% on the test set.
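
The resolution pipeline described above can be sketched as follows (scikit-image for the resampling and scikit-learn for PCA; the mixture-of-experts classifier itself and the real ORL images are not reproduced, so random arrays stand in):

```python
import numpy as np
from skimage.transform import resize
from sklearn.decomposition import PCA

def degrade_and_enhance(face48):
    """Simulate the low-resolution input and the enhancement step:
    48x48 -> 12x12 by nearest neighbor (order=0), then 12x12 -> 48x48
    by bicubic interpolation (order=3)."""
    low = resize(face48, (12, 12), order=0, anti_aliasing=False)
    return resize(low, (48, 48), order=3)

rng = np.random.default_rng(0)
faces = rng.random((100, 48, 48))          # placeholders for the ORL images
enhanced = np.array([degrade_and_enhance(f) for f in faces])
features = PCA(n_components=40).fit_transform(enhanced.reshape(100, -1))
print(features.shape)                      # (100, 40) inputs to the experts
```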

Modified Fuzzy ARTMAP and Supervised Fuzzy ART: Comparative Study with Multispectral Classification

In this article, a modification of the fuzzy ART network algorithm is carried out, aiming to make it supervised. It consists of searching for the comparison, training and vigilance parameters that give the minimum quadratic distances between the outputs of the training base and those obtained by the network. The same process is applied to determine the parameters of the fuzzy ARTMAP that give the most powerful network. The modification consists in having the fuzzy ARTMAP learn a base of examples not just once, as is customary, but as many times as its architecture keeps evolving or until the objective error is reached. In this way, we need not worry about the values to impose on the eight parameters of the network. To evaluate each of these three modified networks, a comparison of their performances is carried out. As an application, we carried out a classification of an image of the Bay of Algiers taken by SPOT XS. The evaluation criteria used are the training duration, the mean square error (MSE) in the control step, and the rate of correct classification per class. The results of this study, presented as curves, tables and images, show that the modified fuzzy ARTMAP presents the best compromise between quality and computing time.
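
For reference, the standard fuzzy ART operations whose parameters (choice α, learning β, vigilance ρ) the article tunes can be sketched as follows (the parameter values and the toy input are illustrative; finding good values automatically is precisely the article's concern):

```python
import numpy as np

def fuzzy_art_step(I, weights, alpha=0.001, beta=1.0, rho=0.75):
    """One presentation of a (complement-coded) input I to fuzzy ART.
    Choice: T_j = |I ^ w_j| / (alpha + |w_j|);
    match:  |I ^ w_j| / |I| >= rho (vigilance test);
    learning: w_j <- beta (I ^ w_j) + (1 - beta) w_j."""
    for j in np.argsort([-np.minimum(I, w).sum() / (alpha + w.sum())
                         for w in weights]):
        ixw = np.minimum(I, weights[j])           # fuzzy AND (component min)
        if ixw.sum() / I.sum() >= rho:            # vigilance test passed
            weights[j] = beta * ixw + (1 - beta) * weights[j]
            return j
    weights.append(I.copy())                      # no match: new category
    return len(weights) - 1

a = np.array([0.2, 0.7])
I = np.concatenate([a, 1 - a])                    # complement coding
weights = []
print(fuzzy_art_step(I, weights), fuzzy_art_step(I, weights))
```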