Portable Continuous Aerosol Concentrator for the Determination of NO2 in the Air

This paper deals with the development of a portable aerosol concentrator and its application to the determination of nitrites and nitrates. The device enables continuous trapping of pollutants in the air. An extensive literature review is presented, covering the development of samplers and the possibilities of their application in the continuous determination of volatile organic compounds. The practical part of the paper focuses on the development of the portable aerosol concentrator. The device, based on an Aerosol Enrichment Unit, has been experimentally verified and subsequently realized. It operates on the principle of equilibrium accumulation of pollutants from the gaseous phase into a polydisperse aerosol of an absorption liquid. The device has been applied to monitoring nitrites and nitrates in the air. A chemiluminescence detector was used for detection; the achieved detection limit was 28 ng/m³ for nitrites and 78 ng/m³ for nitrates.
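
For reference, detection limits of this kind are commonly estimated from the blank noise and the calibration sensitivity; a standard formulation (not necessarily the exact procedure used here) is

    \mathrm{LOD} = \frac{3\,\sigma_{\text{blank}}}{S}

where \sigma_{\text{blank}} is the standard deviation of repeated blank measurements and S is the calibration slope.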

Efficient Method for ECG Compression Using Two Dimensional Multiwavelet Transform

In this paper we introduce an effective ECG compression algorithm based on the two-dimensional multiwavelet transform. Multiwavelets offer simultaneous orthogonality, symmetry, and short support, which is not possible with scalar two-channel wavelet systems. These features are known to be important in signal processing, so multiwavelets offer the possibility of superior performance in image processing applications. The SPIHT algorithm has achieved notable success in still image coding. We propose applying the SPIHT algorithm to the 2-D multiwavelet transform of 2-D arranged ECG signals. Experiments on selected ECG records from the MIT-BIH arrhythmia database reveal that the proposed algorithm is significantly more efficient than previously proposed ECG compression schemes.
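
As a rough sketch of the 2-D arrangement idea, the snippet below cuts an ECG record into beat-aligned rows so that a 2-D transform can exploit inter-beat redundancy. PyWavelets' scalar 2-D wavelet stands in for the multiwavelet transform, and plain magnitude thresholding stands in for SPIHT; the record, beat length, and wavelet choice are assumptions for illustration.

    import numpy as np
    import pywt

    def ecg_to_matrix(signal, beat_len):
        """Cut a 1-D ECG record into fixed-length rows (roughly one beat per row)."""
        n_rows = len(signal) // beat_len
        return signal[:n_rows * beat_len].reshape(n_rows, beat_len)

    ecg = np.random.randn(4096)                   # placeholder for an MIT-BIH record
    image = ecg_to_matrix(ecg, beat_len=256)

    coeffs = pywt.wavedec2(image, 'db4', level=3) # scalar stand-in for the multiwavelet
    arr, slices = pywt.coeffs_to_array(coeffs)

    thresh = np.quantile(np.abs(arr), 0.95)       # keep the largest 5% of coefficients
    arr[np.abs(arr) < thresh] = 0.0               # crude surrogate for SPIHT coding

    rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format='wavedec2'), 'db4')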

Microscopic Emission and Fuel Consumption Modeling for Light-duty Vehicles Using Portable Emission Measurement System Data

Microscopic emission and fuel consumption models have been widely recognized as an effective method to quantify real traffic emission and energy consumption when applied together with microscopic traffic simulation models. This paper presents a framework for developing Microscopic Emission (HC, CO, NOx, and CO2) and Fuel consumption (MEF) models for light-duty vehicles. The variable of composite acceleration is introduced into the MEF model to capture the effect of historical accelerations, interacting with current speed, on emission and fuel consumption. The MEF model is calibrated by the multivariate least-squares method for two types of light-duty vehicle using on-board data collected in Beijing, China by a Portable Emission Measurement System (PEMS). The instantaneous validation results show that the MEF model performs better, with lower Mean Absolute Percentage Error (MAPE), than two other models. Moreover, the aggregate validation results show that the MEF model produces reasonable estimates compared to actual measurements, with prediction errors within 12%, 10%, 19%, and 9% for HC, CO, and NOx emissions and fuel consumption, respectively.
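
A minimal sketch of the calibration step, assuming a polynomial regressor set in instantaneous speed v and composite acceleration a_c (the published MEF terms are not reproduced here; the PEMS samples below are placeholders):

    import numpy as np

    def fit_emission_model(v, a_c, y):
        """Multivariate least-squares fit of ln(emission rate) on speed/acceleration terms."""
        X = np.column_stack([np.ones_like(v), v, v**2, a_c, a_c**2, v * a_c])
        beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
        return beta

    v = np.random.uniform(0, 20, 600)       # speed, m/s (hypothetical PEMS samples)
    a = np.random.uniform(-2, 2, 600)       # composite acceleration, m/s^2
    y = np.exp(0.1 * v + 0.3 * a) + 0.01    # placeholder emission rates
    beta = fit_emission_model(v, a, y)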

An Adaptive Hand-Talking System for the Hearing Impaired

An adaptive Chinese hand-talking system is presented in this paper. After analyzing the three data-collection strategies for new users, an adaptation framework including supervised and unsupervised adaptation methods is proposed. For supervised adaptation, affinity propagation (AP) is used to extract exemplar subsets, and enhanced maximum a posteriori / vector field smoothing (eMAP/VFS) is proposed to pool the adaptation data among different models. For unsupervised adaptation, polynomial segment models (PSMs) are used to help hidden Markov models (HMMs) accurately label the unlabeled data; the "labeled" data, together with signer-independent models, are then input to the MAP algorithm to generate signer-adapted models. Experimental results show that the proposed framework can perform both supervised adaptation with a small amount of labeled data and unsupervised adaptation with a large amount of unlabeled data to tailor the original models, and both achieve improvements in recognition rate.
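
For the exemplar-extraction step, scikit-learn's affinity propagation gives a minimal illustration; the gesture feature vectors below are hypothetical, not the paper's data:

    import numpy as np
    from sklearn.cluster import AffinityPropagation

    features = np.random.randn(200, 32)      # one feature vector per signed gesture
    ap = AffinityPropagation(random_state=0).fit(features)
    exemplar_subset = features[ap.cluster_centers_indices_]  # representative samples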

Contribution to the Study of Thermal Conductivity of Porous Silicon Used in Thermal Sensors

Porous silicon (PS), formed by anodization of a p+-type silicon substrate, consists of a network organized in a pseudo-columnar structure with multiple side ramifications. The structural micro-topology can be interpreted as the fraction of the interconnected solid phase contributing to thermal transport. The reduction in the dimensions of each silicon nanocrystallite during oxidation induces a reduction in thermal conductivity. The integration of thermal sensors in silicon microsystems requires effective insulation of the sensing element; indeed, the low thermal conductivity of PS makes it a very promising route to the fabrication of integrated thermal microsystems. In this work we are interested in measuring the thermal conductivity of PS (at the surface and in depth) by micro-Raman spectroscopy. The thermal conductivity is studied as a function of the anodization parameters (initial doping and current density). We also determine the porosity of the samples by spectroscopic ellipsometry.
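
As background on the micro-Raman technique (a generic first-order model, not necessarily the calibration used here): the Stokes peak shift gives the local temperature rise ΔT under a known absorbed laser power P, and treating the laser spot as an isothermal disk of radius a on a half-space gives the spreading-resistance estimate

    \Delta T \approx \frac{P}{4\kappa a} \quad\Longrightarrow\quad \kappa \approx \frac{P}{4a\,\Delta T}.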

Quality-Controlled Compression Method using Wavelet Transform for Electrocardiogram Signals

This paper presents a new quality-controlled, wavelet-based compression method for electrocardiogram (ECG) signals. Initially, an ECG signal is decomposed using the wavelet transform. The resulting coefficients are then iteratively thresholded to guarantee that a predefined goal percent root-mean-square difference (GPRD) is matched within tolerable boundaries. The extracted non-zero wavelet coefficients (NZWC) are then quantized and encoded with a combination of RLE, Huffman, and arithmetic coding of the NZWC and a resulting look-up table, allowing high compression ratios with good-quality reconstructed signals.
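
A minimal sketch of the quality-control loop, assuming a bisection search on the threshold until the PRD of the reconstruction matches the goal within a tolerance; PyWavelets supplies the transform, and the RLE/Huffman/arithmetic encoding stages are omitted:

    import numpy as np
    import pywt

    def prd(x, x_hat):
        """Percent root-mean-square difference between original and reconstruction."""
        return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

    def threshold_to_gprd(ecg, gprd, tol=0.1, wavelet='bior4.4', level=5):
        coeffs = pywt.wavedec(ecg, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        lo, hi = 0.0, np.max(np.abs(arr))
        for _ in range(50):                      # bisection on the threshold
            t = 0.5 * (lo + hi)
            kept = np.where(np.abs(arr) >= t, arr, 0.0)
            rec = pywt.waverec(pywt.array_to_coeffs(kept, slices,
                                                    output_format='wavedec'), wavelet)
            p = prd(ecg, rec[:len(ecg)])
            if abs(p - gprd) <= tol:
                break
            if p > gprd:
                hi = t                           # too lossy: lower the threshold
            else:
                lo = t                           # too faithful: raise the threshold
        return kept, rec[:len(ecg)]

    signal = np.cumsum(np.random.randn(2048))    # placeholder for an ECG record
    nzwc, approx = threshold_to_gprd(signal, gprd=5.0)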

Feature Selection Approaches with Missing-Value Handling for Data Mining - A Case Study of a Heart Failure Dataset

In this paper, we investigate the characteristics of a clinical dataset with respect to feature selection and classification while handling the missing-values problem, and we propose appropriate techniques to achieve the aim of the study: finding the features that most strongly affect mortality and the mortality time frame. We quantify the complexity of the clinical dataset and, according to that complexity, propose a data mining process to cope with missing values, high dimensionality, and the prediction problem, using missing-value replacement, feature selection, and classification methods. The experimental results will be extended to develop a prediction model for cardiology.
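
A minimal sketch of such a process, assuming a scikit-learn pipeline with hypothetical records (the paper's specific replacement, selection, and classification methods are not reproduced):

    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.impute import SimpleImputer
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Hypothetical heart-failure records: features with missing values, mortality label.
    X = np.random.randn(300, 40)
    X[np.random.rand(*X.shape) < 0.1] = np.nan    # inject 10% missingness
    y = np.random.randint(0, 2, 300)

    pipe = Pipeline([
        ('impute', SimpleImputer(strategy='median')),    # missing-value replacement
        ('select', SelectKBest(f_classif, k=10)),        # feature selection
        ('clf', RandomForestClassifier(random_state=0))  # classification
    ])
    scores = cross_val_score(pipe, X, y, cv=5)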

Heat Transfer Enhancement Studies in a Circular Tube Fitted with Right-Left Helical Inserts with Spacer

An experimental investigation of the heat transfer and friction factor characteristics of a circular tube fitted with 300 right-left helical screw inserts with 100 mm spacers of different twist ratios is presented for laminar and turbulent flow. The experimental data obtained were compared with published plain-tube data. The heat transfer coefficient enhancement for 300 R-L inserts with 100 mm spacers is quite comparable with that for 300 R-L inserts. A performance evaluation analysis shows that the performance ratio increases with increasing Reynolds number and decreasing twist ratio, with the maximum at a twist ratio of 2.93. Moreover, a performance ratio greater than one indicates that this type of twisted insert can be used effectively for heat transfer augmentation.
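
For reference, the performance ratio in such studies is commonly defined at constant pumping power as (the exact definition used here may differ)

    \eta = \frac{Nu/Nu_0}{(f/f_0)^{1/3}},

where Nu and f are the Nusselt number and friction factor of the enhanced tube and Nu_0 and f_0 those of the plain tube; η > 1 indicates a net benefit.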

Temperature Effect on Organic Solar Cell Parameters

In this work, the influence of temperature on the various parameters of solar cells based on organic semiconductors is studied. The short-circuit current Isc increases monotonically with temperature, saturates at a maximum value, and then decreases at high temperatures. The open-circuit voltage Voc decreases linearly with temperature. The fill factor FF and the efficiency, which are directly related to Isc and Voc, follow the variations of the latter. These phenomena are explained by the behaviour of the mobility, which is a temperature-activated process.
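
The stated couplings follow from the standard photovoltaic relations (generic diode-model expressions, not fits from this work):

    V_{oc} \approx \frac{nkT}{q}\,\ln\!\left(\frac{I_{sc}}{I_0} + 1\right), \qquad \eta = \frac{FF \cdot V_{oc} \cdot I_{sc}}{P_{in}},

where the strong temperature dependence of the saturation current I_0 drives the near-linear decrease of Voc with T.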

Evolutionary Feature Selection for Text Documents using the SVM

Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both time and memory restrictions can be quite prohibitive. This justifies the application of feature selection methods to reduce the dimensionality of the document-representation vector. In this paper, we present three feature selection methods: Information Gain, Support Vector Machine feature selection (called SVM_FS), and Genetic Algorithm with SVM (called GA_SVM). We show that the best results were obtained with the GA_SVM method for a relatively small dimension of the feature vector.
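
A minimal sketch of the GA_SVM idea, assuming a binary chromosome that masks features and cross-validated SVM accuracy as the fitness; the document-term matrix, population size, and rates are placeholders:

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 500))              # hypothetical document-term matrix
    y = rng.integers(0, 2, 200)

    def fitness(mask):
        if not mask.any():
            return 0.0
        return cross_val_score(LinearSVC(dual=False), X[:, mask], y, cv=3).mean()

    pop = rng.random((20, X.shape[1])) < 0.05    # sparse initial feature masks
    for _ in range(10):                          # generations
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-10:]]  # truncation selection
        children = []
        for _ in range(10):
            a, b = parents[rng.integers(10, size=2)]
            child = np.where(rng.random(X.shape[1]) < 0.5, a, b)  # uniform crossover
            child ^= rng.random(X.shape[1]) < 0.001               # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents] + children)

    best_mask = pop[np.argmax([fitness(m) for m in pop])]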

Method of Moments Applied to a Cuboidal Cavity Resonator: Effect of Gravitational Field Produced by a Black Hole

This paper deals with the formulation of Maxwell's equations in a cavity resonator in the presence of the gravitational field produced by a black hole. The metric of space-time due to the black hole is the Schwarzschild metric, conventionally expressed in spherical polar coordinates. In order to adapt this metric to our problem, we have considered it in a small region close to the black hole and expressed it locally in a Cartesian system.
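
For reference, the Schwarzschild metric in spherical polar coordinates is

    ds^2 = -\left(1 - \frac{2GM}{c^2 r}\right) c^2\,dt^2 + \left(1 - \frac{2GM}{c^2 r}\right)^{-1} dr^2 + r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right),

and in a region small compared with r the metric coefficients are approximately constant, which is what permits the local Cartesian re-expression.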

Evaluation of the Energy Consumption per Bit in BENES Optical Packet Switch

We evaluate the average energy consumption per bit in Optical Packet Switches equipped with a BENES switching fabric realized in Semiconductor Optical Amplifier (SOA) technology. We also study the impact that the Amplified Spontaneous Emission (ASE) noise generated by a transmission system has on the power consumption of the BENES switches, due to the gain saturation of the SOAs used to realize the switching fabric. As an example, for 32×32 switches supporting 64 wavelengths and an offered traffic load of 0.8, the average energy consumption per bit is 2.34 · 10^-1 nJ/bit, and it increases as the ASE noise introduced by the transmission systems increases.
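
Schematically, the per-bit figure is the average switch power divided by the aggregate carried bit rate (notation assumed here, not taken from the paper):

    E_{bit} = \frac{P_{switch}}{\rho\, N\, W\, B},

with N the switch size, W the number of wavelengths per port, B the per-channel bit rate, and ρ the offered traffic load.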

Development of Knowledge Portal using Open Source Tools: A Case Study of FIIT, UNISEL

A knowledge-sharing culture contributes to a positive working environment. Currently, there is no platform for the Faculty of Industrial Information Technology (FIIT), UNISEL academic staff to share knowledge among themselves: sharing is done manually, through common meetings or offline discussions, and there is no repository for future retrieval. With an open source solution, however, the development of a knowledge-based application can reduce costs tremendously. In this paper we discuss the domain on which this knowledge portal is being developed and the deployment of open source tools such as Joomla, the PHP programming language, and MySQL. This knowledge portal is evidence that open source tools are also reliable for developing knowledge-based portals. These recommendations will be useful to the open source community in producing more open source products in the future.

Torque Based Selection of ANN for Fault Diagnosis of Wound Rotor Asynchronous Motor-Converter Association

In this paper, an automatic diagnosis system is developed to detect and locate, in real time, the faults of a wound-rotor asynchronous machine associated with an electronic converter. For this purpose, the signals of the measured parameters (current and speed) are processed and used, first, as indicator variables of the machine faults under study and, second, as inputs to an Artificial Neural Network (ANN) for classification, in order to identify the type of fault in progress. Once a fault is detected, the interpretation system reports the fault type and its location.
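
A minimal sketch of the classification stage, assuming feature vectors derived from the current and speed signals and a small multilayer perceptron as the ANN; features, labels, and sizes are placeholders:

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical features extracted from stator current and speed signals
    # (e.g., spectral amplitudes), with fault-class labels 0..3.
    X = np.random.randn(400, 12)
    y = np.random.randint(0, 4, 400)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
    ann.fit(X_tr, y_tr)
    print('fault-type accuracy:', ann.score(X_te, y_te))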

Effective Defect Prevention Approach in Software Process for Achieving Better Quality Levels

Defect prevention is the most vital but often neglected aspect of software quality assurance in any project. If practiced at all stages of software development, it can reduce the time, overhead, and resources required to engineer a high-quality product. A key challenge for the IT industry is to engineer software products with minimal post-deployment defects. This work is an analysis based on data obtained for five selected projects from leading software companies of varying software production competence. The main aim of this paper is to provide information on various methods and practices supporting defect detection and prevention that lead to successful software development. The defect prevention techniques studied unearth 99% of defects. Inspection is found to be an essential technique for producing near-ideal software, through enhanced methodologies of aided and unaided inspection schedules. On average, 13% to 15% of project effort spent on inspection and 25% to 30% spent on testing are required to eliminate 99% to 99.75% of defects. A comparison of the end results for the five selected projects across the companies is also presented, throwing light on how a company can position itself with an appropriate complementary ratio of inspection to testing.

Hydrogels Based on Carrageenan Extracted from Kappaphycus alvarezii

A hydrogel based on carrageenan extracted from Kappaphycus alvarezii was prepared by immersing carrageenan film in glutaraldehyde solution (GA, 4% w/w) for 2 min, followed by thermal curing at 110°C for 25 min. The method of carrageenan recovery strongly determines the properties of the crosslinked carrageenan. Hydrogel obtained from alkali-treated carrageenan showed higher swelling ability than hydrogel from non-alkali-treated carrageenan. Hydrogel from alkali-treated carrageenan was also sensitive to the pH of the medium.
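
Swelling ability in such studies is typically quantified by the swelling ratio (standard definition, assumed here):

    SR = \frac{W_s - W_d}{W_d},

where W_s is the weight of the swollen hydrogel and W_d its dry weight.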

A Frugal Bidding Procedure for Replicating WWW Content

Fine-grained data replication over the Internet allows duplication of frequently accessed data objects, as opposed to entire sites, at certain locations so as to improve the performance of large-scale content distribution systems. In a distributed system, agents representing their sites try to maximize their own benefit, since they are driven by different goals, such as minimizing their communication costs, latency, etc. In this paper, we use game-theoretical techniques, and in particular auctions, to identify a bidding mechanism that encapsulates the selfishness of the agents while keeping a controlling hand over them. In essence, the proposed game-theory-based mechanism is the study of what happens when independent agents act selfishly and how to control them to maximize overall performance. A bidding mechanism asks how one can design systems so that agents' selfish behavior results in the desired system-wide goals. Experimental results reveal that this mechanism provides excellent solution quality while maintaining fast execution time. Comparisons are recorded against some well-known techniques such as greedy, branch and bound, game-theoretical auctions, and genetic algorithms.
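
As a minimal illustration of the auction idea (not the paper's exact mechanism), a sealed-bid second-price auction charges the winner the second-highest bid, which makes truthful bidding a dominant strategy even for selfish agents:

    def second_price_auction(bids):
        """bids: dict mapping agent id -> bid (an agent's valuation of hosting a replica).
        Returns (winner, price): the highest bidder wins, paying the second-highest bid."""
        ranked = sorted(bids, key=bids.get, reverse=True)
        price = bids[ranked[1]] if len(ranked) > 1 else 0.0
        return ranked[0], price

    winner, price = second_price_auction({'siteA': 9.0, 'siteB': 7.5, 'siteC': 4.0})
    # -> ('siteA', 7.5): siteA hosts the object and is charged siteB's bid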

A Study of Calcination and Carbonation of Cockle Shell

Calcium oxide (CaO) as a carbon dioxide (CO2) adsorbent at elevated temperature has been very well received thus far. CaO can be synthesized from natural calcium carbonate (CaCO3) sources through the reversible calcination-carbonation process. In this study, cockle shell has been selected as the CaO precursor. The objectives of the study are to investigate the performance of calcination and carbonation with respect to temperature, heating rate, particle size, and duration. Overall, the best performance is obtained at a calcination temperature of 850°C for 40 minutes, a heating rate of 20°C/min, a particle size of < 0.125 mm, and a carbonation temperature of 650°C. The synthesized materials have been characterized by nitrogen physisorption and surface morphology analysis. The effectiveness of the synthesized cockle shell in capturing CO2 (0.72 kg CO2/kg adsorbent), which is comparable to a commercial adsorbent (0.60 kg CO2/kg adsorbent), makes it a most promising material for CO2 capture.
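
The underlying reversible reaction is

    \mathrm{CaCO_3(s)} \;\rightleftharpoons\; \mathrm{CaO(s)} + \mathrm{CO_2(g)},

with calcination (endothermic, here at 850°C) running left to right and carbonation (exothermic, here at 650°C) running right to left.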

Signal Reconstruction Using Cepstrum of Higher Order Statistics

This paper presents an algorithm for reconstructing the phase and magnitude responses of an impulse response when only the output data are available. The system is driven by a zero-mean, independent, identically distributed (i.i.d.) non-Gaussian sequence that is not observed. The additive noise is assumed to be Gaussian. This is an important and essential problem in many practical applications in various science and engineering areas, such as biomedical, seismic, and speech signal processing. The method is based on evaluating the bicepstrum of the third-order statistics of the observed output data. Simulation results are presented that demonstrate the performance of this method.
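
A rough numerical sketch of the estimate, assuming a direct (FFT-based) segment-averaged bispectrum followed by a complex-log inverse transform; windowing and scaling conventions vary and are omitted here:

    import numpy as np

    def bicepstrum(x, nfft=128):
        """Segment-averaged bispectrum B(f1,f2) = E[X(f1) X(f2) conj(X(f1+f2))],
        then the 2-D inverse FFT of log B. A small constant guards the log."""
        segs = x[: len(x) // nfft * nfft].reshape(-1, nfft)
        B = np.zeros((nfft, nfft), dtype=complex)
        idx = (np.arange(nfft)[:, None] + np.arange(nfft)[None, :]) % nfft
        for seg in segs:
            X = np.fft.fft(seg - seg.mean(), nfft)
            B += X[:, None] * X[None, :] * np.conj(X[idx])
        B /= len(segs)
        return np.fft.ifft2(np.log(B + 1e-12))

    # Hypothetical output record driven by an unobserved non-Gaussian i.i.d. input.
    y = np.convolve(np.random.exponential(size=4096) - 1.0, [1.0, 0.5, 0.2])
    bc = bicepstrum(y)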

Testing Loaded Programs Using Fault Injection Technique

Fault tolerance is critical in many of today's large computer systems. This paper focuses on improving fault tolerance through testing. Moreover, it concentrates on memory faults: how to access the editable part of a process's memory space and how this part is affected. A special Software Fault Injection Technique (SFIT) is proposed for this purpose. This is done by sequentially scanning the memory of the target process and trying to edit the maximum number of bytes inside that memory. The technique was implemented and tested on a group of programs from software packages such as jetAudio, Notepad, Microsoft Word, Microsoft Excel, and Microsoft Outlook. The results from the test sample process indicate that the size of the scanned area depends on several factors: process size, process type, and the virtual memory size of the machine under test. The results show that increasing the process size increases the scanned memory space. They also show that input-output processes have a larger scanned area than other processes. Increasing the virtual memory size also affects the size of the scanned area, but only up to a certain limit.
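
A minimal Windows-only sketch of the scanning step, assuming ctypes access to kernel32 (the process id is hypothetical; actual fault injection would additionally call WriteProcessMemory on the regions found):

    import ctypes
    import ctypes.wintypes as wt

    kernel32 = ctypes.windll.kernel32
    kernel32.OpenProcess.restype = wt.HANDLE
    PROCESS_ALL_ACCESS = 0x1F0FFF
    MEM_COMMIT, PAGE_READWRITE = 0x1000, 0x04

    class MEMORY_BASIC_INFORMATION(ctypes.Structure):
        _fields_ = [('BaseAddress', ctypes.c_void_p),
                    ('AllocationBase', ctypes.c_void_p),
                    ('AllocationProtect', wt.DWORD),
                    ('RegionSize', ctypes.c_size_t),
                    ('State', wt.DWORD),
                    ('Protect', wt.DWORD),
                    ('Type', wt.DWORD)]

    def editable_bytes(pid):
        """Walk the target's address space, totalling committed read-write bytes."""
        handle = kernel32.OpenProcess(PROCESS_ALL_ACCESS, False, pid)
        mbi, addr, total = MEMORY_BASIC_INFORMATION(), 0, 0
        while kernel32.VirtualQueryEx(handle, ctypes.c_void_p(addr),
                                      ctypes.byref(mbi), ctypes.sizeof(mbi)):
            if mbi.State == MEM_COMMIT and mbi.Protect == PAGE_READWRITE:
                total += mbi.RegionSize        # candidate bytes for fault injection
            addr += mbi.RegionSize
        kernel32.CloseHandle(handle)
        return total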