A Methodological Approach for Detecting Burst Noise in the Time Domain

Burst noise is a destructive type of noise frequently found in semiconductor devices and integrated circuits, yet detecting and removing it has proved challenging for IC designers and users. Based on the properties of burst noise, a methodological approach is proposed in this paper by which burst noise can be analysed and detected in the time domain. The principles and properties of burst noise are expounded first. The feasibility of detecting burst noise in the time domain by means of the wavelet transform is then corroborated, and the multi-resolution characteristics of Gaussian noise, burst noise and blurred burst noise are discussed in detail through computer simulation. Furthermore, a practical method for choosing the wavelet transform parameters is derived from extensive experiments and data statistics. The methodology shows promise for a wide variety of applications.
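
As a rough illustration of the detection idea, the sketch below (assuming Python with NumPy and PyWavelets, which are not named in the paper) decomposes a synthetic record into multiple resolution levels and flags samples whose finest-scale detail coefficients stand out against a robust noise estimate; the abrupt transitions of a burst appear there, while stationary Gaussian noise does not. The wavelet family, decomposition level and threshold are illustrative choices, not the parameters derived in the paper.

```python
# Minimal sketch of time-domain burst detection via multi-resolution analysis.
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 20_000
signal = 0.05 * rng.standard_normal(n)      # stationary Gaussian background
signal[5_000:5_400] += 1.0                  # synthetic burst (level shift)

# Multi-resolution decomposition: the abrupt transitions of the burst show up
# as large finest-scale detail coefficients, unlike stationary Gaussian noise.
coeffs = pywt.wavedec(signal, 'db4', level=4)
d1 = coeffs[-1]                             # finest-scale detail coefficients

threshold = 5 * np.median(np.abs(d1)) / 0.6745   # robust (MAD-based) estimate
burst_idx = np.nonzero(np.abs(d1) > threshold)[0]
print("flagged detail-coefficient indices:", burst_idx)
```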

Capacitive ECG Measurement by Conductive Fabric Tape

Capacitive electrocardiogram (ECG) measurement is an attractive approach for long-term health monitoring. However, little literature is available on its implementation, especially for multichannel systems using standard ECG leads. This paper begins with the design criteria for capacitive ECG measurement and presents a multichannel limb-lead capacitive ECG system with conductive fabric tapes pasted on a double-layer PCB as the capacitive sensors. The proposed prototype system incorporates a capacitive driven-body (CDB) circuit to reduce common-mode power-line interference (PLI). The prototype system has been verified to be stable by theoretical analysis and long-term practical experiments. The signal quality is comparable to that acquired by commercial ECG machines. The feasible size of the capacitive sensor and its distance to the body have also been evaluated by a series of tests, which suggest a sensor size greater than 60 cm2 and a coupling distance smaller than 1.5 mm for capacitive ECG measurement.
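
A quick way to see why the suggested sensor size and distance matter is to model the electrode as a parallel-plate capacitor. The sketch below (Python) uses the paper's 60 cm2 / 1.5 mm figures together with an assumed fabric permittivity and amplifier input resistance, neither of which comes from the paper, to estimate the coupling capacitance and the resulting high-pass corner frequency.

```python
# Back-of-envelope check of the coupling capacitance for a capacitive ECG
# electrode modeled as a parallel-plate capacitor.
import math

eps0 = 8.854e-12          # F/m, vacuum permittivity
eps_r = 3.0               # assumed relative permittivity of the fabric layer
area = 60e-4              # m^2  (60 cm^2, the suggested minimum sensor size)
distance = 1.5e-3         # m    (1.5 mm, the suggested maximum gap)
r_in = 1e9                # ohm, assumed amplifier input resistance

c = eps0 * eps_r * area / distance
f_corner = 1.0 / (2 * math.pi * r_in * c)
print(f"coupling capacitance ~ {c * 1e12:.0f} pF")
print(f"high-pass corner     ~ {f_corner:.2f} Hz")
# Larger area or smaller gap -> larger C -> lower corner frequency,
# which preserves the low-frequency content of the ECG.
```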

Repairing and Strengthening Earthquake Damaged RC Beams with Composites

The prevailing judgment for earthquake-damaged reinforced concrete (RC) structures is to demolish and rebuild them. This paper therefore investigates whether earthquake-damaged RC beams can instead be repaired, offering an economical contribution to modern society. To this end, RC beams that had been severely damaged in shear under cyclic loading were repaired and strengthened with externally bonded carbon fibre reinforced polymer (CFRP) strips. Four specimens, apart from the reference beam, were separated into two distinct groups. The two experimental beams in the first group were first tested to failure and then repaired and strengthened with CFRP strips. The two undamaged specimens in the second group were not repaired but were strengthened with the identical strengthening scheme for comparison. The study examines whether earthquake-damaged RC beams that have been repaired and strengthened exhibit strength and behavior similar to equally strengthened, undamaged RC beams. A strength comparable to that of the strengthened specimens was indeed achieved by the repaired and strengthened specimens. The test results confirmed that the repair and strengthening scheme was effective for the specimens with the cracking patterns considered in the experimental program.

Implementation of the Personal Emergency Response System

The elderly face an increasing risk of falls and have more fragile bones than younger people. When a fall occurs, it is important to detect this emergency state, because such events often lead to more serious illness or even death. In this paper, a PDA-based system for detecting emergency situations was implemented using a 3-axis accelerometer, as follows. The signals were acquired from the 3-axis accelerometer and transmitted to the PDA through a Bluetooth module. The system classifies human activity and detects emergency states such as falls. When a fall occurs, the system generates an alarm on the PDA. If the subject does not respond to the alarm, the system determines whether the current situation is an emergency and, in the case of an urgent situation, sends the relevant information to an emergency center. Three different studies were conducted on 12 experimental subjects, with results indicating good accuracy. The first study detected posture changes during daily activity. The second study detected the direction of a fall. The third study checked the classification of daily physical activity; in this study, each test lasted at least 1 min. The acceleration signals were compared and evaluated across various posture changes with the 3-axis accelerometer module attached to the chest. The newly developed system has important features such as portability, convenience and low cost. One of its main advantages is that it can be used in a home healthcare environment; another is its low manufacturing cost. The implemented system detects falls accurately and is therefore expected to be widely used in emergency situations.
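
For illustration, a minimal threshold-style rule of the kind often used with chest-worn 3-axis accelerometers is sketched below in Python; the impact threshold, the post-impact window and the posture test are assumptions, not the classification rules implemented on the PDA system.

```python
# Illustrative fall-detection rule on 3-axis accelerometer samples: a fall
# candidate is flagged when the signal magnitude vector (SMV) exceeds an
# impact threshold and the posture afterwards is near-horizontal.
import numpy as np

IMPACT_G = 2.5          # assumed impact threshold, in g
STILL_WINDOW = 50       # samples inspected after the impact (assumed)

def detect_fall(ax, ay, az):
    """ax, ay, az: arrays of acceleration in g from a chest-mounted sensor."""
    smv = np.sqrt(ax**2 + ay**2 + az**2)
    impacts = np.nonzero(smv > IMPACT_G)[0]
    for i in impacts:
        window = slice(i + 1, i + 1 + STILL_WINDOW)
        if window.stop > len(az):
            continue
        # After a fall the trunk axis (assumed to be z here) is roughly
        # horizontal, so its gravity component stays small while the total
        # SMV settles back towards 1 g.
        if np.mean(np.abs(az[window])) < 0.5 and np.mean(smv[window]) < 1.2:
            return True, i
    return False, None
```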

Consistent Modeling of Functional Dependencies along with World Knowledge

In this paper we propose a method for vision systems to consistently represent functional dependencies between different visual routines along with relational short- and long-term knowledge about the world. Here the visual routines are bound to visual properties of objects stored in the memory of the system. Furthermore, the functional dependencies between the visual routines are seen as a graph also belonging to the object's structure. This graph is parsed in the course of acquiring a visual property of an object to automatically resolve the dependencies of the bound visual routines. Using this representation, the system is able to dynamically rearrange the processing order while keeping its functionality. Additionally, the system is able to estimate the overall computational costs of a certain action. We will also show that the system can efficiently use that structure to incorporate already acquired knowledge and thus reduce the computational demand.
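
A minimal sketch of the dependency-resolution idea is given below in Python: each visual routine is stored with the routines it depends on and an assumed cost, the dependency graph is topologically sorted when a property is requested, and the per-routine costs are summed to estimate the cost of the action. Routine names and costs are hypothetical.

```python
# Resolving visual-routine dependencies stored with an object via a
# topological sort (Python 3.9+ standard library).
from graphlib import TopologicalSorter

# routine -> (routines it depends on, assumed computational cost)
dependencies = {
    "segment":  (set(),              5.0),
    "color":    ({"segment"},        1.0),
    "shape":    ({"segment"},        2.0),
    "identity": ({"color", "shape"}, 4.0),
}

def acquire(property_name):
    # Collect the transitive dependencies of the requested property.
    needed, stack = set(), [property_name]
    while stack:
        node = stack.pop()
        if node not in needed:
            needed.add(node)
            stack.extend(dependencies[node][0])
    # Topologically order just that subgraph and sum the costs.
    sub = {n: dependencies[n][0] & needed for n in needed}
    order = list(TopologicalSorter(sub).static_order())
    cost = sum(dependencies[n][1] for n in needed)
    print("execution order:", order, "| estimated cost:", cost)

acquire("identity")   # runs segment, then color and shape, then identity
```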

Arriving at an Optimum Value of Tolerance Factor for Compressing Medical Images

Medical imaging takes advantage of digital technology in imaging and teleradiology. In teleradiology systems a large amount of data is acquired, stored and transmitted. A major technology that may help to solve the problems associated with massive data storage and data transfer capacity is data compression and decompression. Many image compression methods are available, classified as lossless or lossy. In a lossy compression method the decompressed image contains some distortion. Fractal image compression (FIC) is a lossy compression method in which an image is coded as a set of contractive transformations in a complete metric space; the set of contractive transformations is guaranteed to produce an approximation to the original image. In this paper FIC is achieved by partitioned iterated function systems (PIFS) using quadtree partitioning. PIFS is applied to different modalities such as ultrasound, CT scan, angiogram, X-ray, and mammogram images. In each modality approximately twenty images are considered, and the average compression ratio and PSNR values are computed. In this fractal encoding method, the tolerance factor Tmax is varied from 1 to 10 while keeping the other standard parameters constant. For all image modalities the compression ratio and peak signal-to-noise ratio (PSNR) are computed and studied; the quality of the decompressed image is assessed by its PSNR value. The results show that the compression ratio increases with the tolerance factor and that mammograms have the highest compression ratio. Owing to the properties of fractal compression, the image quality is not degraded up to an optimum tolerance factor of Tmax = 8.
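
The role of the tolerance factor can be illustrated with a tolerance-driven quadtree split, sketched below in Python with NumPy. To keep the sketch short, the mean grey level of a block stands in for the full domain-block search of a real PIFS encoder, so this is not the coder used in the paper; it only shows how a larger Tmax accepts larger blocks, which raises the compression ratio and lowers the PSNR.

```python
import numpy as np

def quadtree_partition(img, x, y, size, tmax, min_size=4, blocks=None):
    """Accept a block if its RMS error against a simple approximation is
    within the tolerance Tmax, otherwise split it into four sub-blocks."""
    if blocks is None:
        blocks = []
    block = img[y:y + size, x:x + size].astype(float)
    rms_error = np.sqrt(np.mean((block - block.mean()) ** 2))
    if rms_error <= tmax or size <= min_size:
        blocks.append((x, y, size))
    else:
        half = size // 2
        for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
            quadtree_partition(img, x + dx, y + dy, half, tmax, min_size, blocks)
    return blocks

def psnr(original, decoded):
    """Peak signal-to-noise ratio used to judge the decompressed image."""
    mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

# Smooth synthetic image: a larger Tmax yields fewer, larger blocks.
img = np.add.outer(np.arange(128), np.arange(128)) / 2.0
for tmax in (1, 8):
    print(f"Tmax = {tmax}: {len(quadtree_partition(img, 0, 0, 128, tmax))} blocks")
```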

Biosynthesis and In Vitro Studies of Silver Bionanoparticles Synthesized from Aspergillus Species and their Antimicrobial Activity against Multi-Drug-Resistant Clinical Isolates

Antimicrobial resistance is becoming a major factor in virtually all hospital-acquired infections, which may soon become untreatable, and is a serious public health problem. These concerns have led to a major research effort to discover alternative strategies for the treatment of bacterial infection. Nanobiotechnology is an emerging and fast-developing field with potential applications for human welfare. An important area of nanotechnology is the development of reliable and environmentally friendly processes for the synthesis of nanoscale particles through biological systems. The present study reports the use of the fungal strain Aspergillus species for the extracellular synthesis of bionanoparticles from a 1 mM silver nitrate (AgNO3) solution. The work focuses on the synthesis of metallic silver bionanoparticles through the reduction of aqueous Ag+ ions by the culture supernatants of the microorganism. The bio-reduction of the Ag+ ions in solution was monitored in the aqueous component, and the spectrum of the solution was measured with a UV-visible spectrophotometer. The bionanoscale particles were further characterized by Atomic Force Microscopy (AFM), Fourier Transform Infrared Spectroscopy (FTIR) and thin-layer chromatography. The synthesized bionanoscale particles showed a maximum absorption at 385 nm in the visible region. Atomic Force Microscopy showed that the silver bionanoparticles ranged in size from 250 nm to 680 nm. The work also analyzed the antimicrobial efficacy of the silver bionanoparticles against various multi-drug-resistant clinical isolates. The study emphasizes the applicability of synthesizing metallic nanostructures and understanding the biochemical and molecular mechanisms of nanoparticle formation by the cell filtrate, in order to achieve better control over the size and polydispersity of the nanoparticles. This could help in developing nanomedicines against various multi-drug-resistant human pathogens.

A Novel Machining Signal Filtering Technique: Z-notch Filter

A filter is used to remove undesirable frequency information from a dynamic signal. This paper shows that the Z-notch filtering technique can be applied to remove noise from a machining signal. In machining, the noise components were identified from the sound produced by the operation of the machine components themselves, such as the hydraulic system, the motor and the machine environment. By correlating the noise components with the measured machining signal, the components of interest of the measured machining signal, which are less affected by the noise, can be extracted. The filtered signal is thus more reliable for analysis in terms of noise content than the unfiltered signal. Significantly, the I-kaz method, which comprises a three-dimensional graphical representation and the I-kaz coefficient Z∞, could differentiate between the filtered and the unfiltered signals: a larger scattering space and a higher value of Z∞ indicated that the signal was heavily contaminated by noise. This method can be utilised as a proactive tool for evaluating the noise content in a signal. Evaluating and eliminating the noise content is particularly important for machining fault diagnosis. The Z-notch filtering technique extracted the noise components from the measured machining signal reliably and with high efficiency. Even though the measured signal was exposed to severe noise disruption, the signal generated by the interaction between the cutting tool and the workpiece could still be acquired. Therefore, noise interference that could alter the original signal features and consequently degrade the useful sensory information can be eliminated.
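
The idea of suppressing an identified noise component can be illustrated with a standard IIR notch filter from SciPy, sketched below in Python. This generic filter stands in for the authors' Z-notch design, which is not reproduced here, and the 1 kHz "hydraulic" component and sampling rate are assumed example values.

```python
# Removing an identified noise component from a machining signal with a
# standard IIR notch filter (scipy.signal.iirnotch).
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 20_000                                      # sampling rate, Hz (assumed)
t = np.arange(fs) / fs
cutting = 0.8 * np.sin(2 * np.pi * 300 * t)      # signal of interest (assumed)
noise = 0.5 * np.sin(2 * np.pi * 1000 * t)       # identified machine noise
measured = cutting + noise

b, a = iirnotch(w0=1000, Q=30, fs=fs)            # notch centred on the noise
filtered = filtfilt(b, a, measured)              # zero-phase filtering

print("residual mean-square error vs. clean signal:",
      np.round(np.mean((filtered - cutting) ** 2), 4))
```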

Metal Streak Analysis with different Acquisition Settings in Postoperative Spine Imaging: A Phantom Study

CT assessment of the postoperative spine is challenging in the presence of metal streak artifacts that degrade the quality of CT images. In this paper, we studied the influence of different acquisition parameters on the magnitude of metal streaking. A water-bath phantom was constructed with a metal insert to mimic postoperative spine assessment. The phantom was scanned with different acquisition settings, and the acquired data were reconstructed using various reconstruction settings. Standardized ROIs were defined within the streaking region for image analysis. The results show that increasing kVp and mAs enhanced the SNR values by reducing image noise. A sharper kernel enhanced image quality compared with a smooth kernel, but produced more noise in the images, with higher CT number fluctuation. The noise between the two kernels was significantly different (P
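
For reference, the ROI-based figures of merit are typically computed as sketched below in Python with NumPy: the SNR is the mean CT number inside a standardized ROI divided by its standard deviation (the image noise). The synthetic slice and ROI coordinates are placeholders, not data from the phantom study.

```python
import numpy as np

def roi_snr(ct_slice, y0, y1, x0, x1):
    roi = ct_slice[y0:y1, x0:x1].astype(float)
    mean_hu = roi.mean()          # mean CT number (HU) inside the ROI
    noise = roi.std(ddof=1)       # image noise = standard deviation in the ROI
    return mean_hu / noise, mean_hu, noise

rng = np.random.default_rng(0)
ct_slice = rng.normal(40, 12, size=(512, 512))   # synthetic water-bath slice
snr, mean_hu, noise = roi_snr(ct_slice, 200, 232, 200, 232)
print(f"mean = {mean_hu:.1f} HU, noise = {noise:.1f} HU, SNR = {snr:.1f}")
```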

Performance Evaluation of an Amperometric Biosensor using a Simple Microcontroller based Data Acquisition System

In this paper we propose a methodology for developing an amperometric biosensor for the analysis of glucose concentration using a simple microcontroller-based data acquisition system. The work involves the development of a detachable membrane unit (an enzyme-based biomembrane) with glucose oxidase immobilized on the membrane, and its interfacing to the signal conditioning system. The current generated by the biosensor for different glucose concentrations was signal-conditioned, then acquired and computed by a simple AT89C51 microcontroller. The optimum operating parameters for the best performance were found and reported, and a detailed performance evaluation of the biosensor was carried out. The proposed microcontroller-based biosensor system has a sensitivity of 0.04 V/g/dl and a resolution of 50 mg/dl. It exhibited very good inter-day stability, observed over 30 days. Compared with a reference method such as HPLC, the accuracy of the proposed biosensor system is well within ±1.5%. The system can be used for real-time analysis of glucose concentration in fields such as food and fermentation and in clinical (in-vitro) applications.
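
Using the reported sensitivity of 0.04 V per g/dl, the conditioned output voltage maps to concentration as in the short sketch below; the baseline (zero-glucose) voltage and the example reading are assumed values for illustration and are not taken from the paper.

```python
SENSITIVITY_V_PER_G_DL = 0.04   # reported sensitivity of the biosensor system
BASELINE_V = 0.10               # assumed output voltage at zero glucose

def glucose_mg_per_dl(v_out):
    """Convert the conditioned output voltage to glucose concentration."""
    g_per_dl = (v_out - BASELINE_V) / SENSITIVITY_V_PER_G_DL
    return g_per_dl * 1000.0    # g/dl -> mg/dl

# The reported 50 mg/dl resolution corresponds to a voltage step of
# 0.04 V/(g/dl) * 0.05 g/dl = 2 mV at the acquisition system.
print(glucose_mg_per_dl(0.104))  # 4 mV above baseline -> approx. 100 mg/dl
```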

Nonlinear Dynamical Characterization of Heart Rate Variability Time Series of Meditation

Many recent electrophysiological studies have revealed the importance of investigating the meditation state in order to achieve an increased understanding of autonomic control of cardiovascular function. In this paper, we characterize heart rate variability (HRV) time series acquired during meditation using nonlinear dynamical parameters. We computed the minimum embedding dimension (MED), correlation dimension (CD), largest Lyapunov exponent (LLE), and nonlinearity scores (NLS) from HRV time series of eight Chi and four Kundalini meditation practitioners. The pre-meditation state was used as a baseline (control) state against which the estimated parameters were compared. The chaotic nature of HRV during both pre-meditation and meditation is confirmed by the MED. The meditation state showed a significant decrease in CD and an increase in LLE of HRV in comparison with the pre-meditation state, indicating a less complex and less predictable nature of HRV. In addition, the HRV of the meditation state showed a higher NLS than that of the pre-meditation state. The study indicates that cardiac dynamics, as revealed by HRV, are highly nonlinear during meditation, rather than meditation being a quiescent state.
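
Two of the reported descriptors, CD and LLE, can be estimated from an RR-interval series as in the Python sketch below, which uses the nolds package as a generic stand-in; the exact estimators, embedding settings and the MED/NLS computations used in the paper are not reproduced here.

```python
import numpy as np
import nolds

def hrv_nonlinear_summary(rr_intervals, emb_dim=10):
    rr = np.asarray(rr_intervals, dtype=float)
    cd = nolds.corr_dim(rr, emb_dim)          # Grassberger-Procaccia estimate
    lle = nolds.lyap_r(rr, emb_dim=emb_dim)   # Rosenstein largest Lyapunov exp.
    return {"CD": cd, "LLE": lle}

# Usage with a synthetic RR series (seconds); real data would come from the
# pre-meditation and meditation recordings and be compared state by state.
rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * np.sin(np.linspace(0, 60, 600)) + 0.02 * rng.standard_normal(600)
print(hrv_nonlinear_summary(rr))
```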

A Method for Evaluating Artery Diameter from Ultrasound Video

The cardiovascular system has become one of the most important subjects of clinical research, particularly the measurement of arterial blood flow, so correct determination of the arterial diameter is crucial. We propose a novel, semi-automatic method for artery lumen detection based on a Gaussian probability function. The usability of the proposed method was assessed by analyzing ultrasound B-mode CFA video sequences acquired from eleven healthy volunteers. The correlation coefficient between the manual and semi-automatic measurements of the arterial diameter was 0.996. The proposed method for detecting the artery boundary is novel and sufficiently accurate for the measurement of artery diameter.
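
One way a Gaussian model can be exploited for lumen detection is sketched below in Python with SciPy: a Gaussian is fitted to the inverted grey-level profile taken across the vessel (the lumen appears dark in B-mode) and a diameter is read off the fitted width. The synthetic profile, pixel size and the two-sigma width convention are illustrative assumptions, not the calibration used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(y, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((y - mu) / sigma) ** 2) + offset

def estimate_diameter(profile, pixel_size_mm):
    """profile: 1-D grey-level profile sampled across the artery."""
    inverted = profile.max() - profile            # dark lumen becomes a peak
    y = np.arange(profile.size, dtype=float)
    p0 = [inverted.max(), y[np.argmax(inverted)], profile.size / 6.0, 0.0]
    (amp, mu, sigma, offset), _ = curve_fit(gaussian, y, inverted, p0=p0)
    return 2.0 * abs(sigma) * pixel_size_mm       # illustrative width measure

# Synthetic profile: bright walls, dark lumen, sampled at 0.1 mm per pixel.
y = np.arange(120.0)
profile = 180 - 120 * np.exp(-0.5 * ((y - 60) / 30) ** 2)
print(f"estimated diameter ~ {estimate_diameter(profile, 0.1):.1f} mm")
```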

Analysis of the EEG Signal for a Practical Biometric System

This paper discusses the effectiveness of the EEG signal for human identification using four or fewer channels from two different types of EEG recordings. Studies have shown that the EEG signal has biometric potential because it varies from person to person and is practically impossible to replicate or steal. Data were collected from 10 male subjects while resting with eyes open and eyes closed, in 5 separate sessions conducted over a course of two weeks. Features were extracted using wavelet packet decomposition and analyzed to obtain the feature vectors. Subsequently, a neural network algorithm was used to classify the feature vectors. The results show that whether or not the subjects' eyes were open is insignificant for a 4-channel biometric system, which achieved a classification rate of 81%. However, for a 2-channel system, the P4 channel should not be included if the data are acquired with the subjects' eyes open. For a 2-channel system using only the C3 and C4 channels, a classification rate of 71% was achieved.
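
The feature-extraction stage can be sketched as below in Python with PyWavelets: a wavelet packet decomposition of an EEG epoch whose terminal-node energies form the feature vector. The wavelet, decomposition level and energy features are assumed choices, and the neural-network classification stage is omitted.

```python
import numpy as np
import pywt

def wavelet_packet_features(epoch, wavelet="db4", level=4):
    wp = pywt.WaveletPacket(data=epoch, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")      # terminal nodes
    return np.array([np.sum(np.square(n.data)) for n in nodes])

# One feature vector per channel; a 4-channel recording would concatenate
# four such vectors before being fed to the classifier.
rng = np.random.default_rng(0)
epoch = rng.standard_normal(1024)                  # synthetic 1-channel epoch
print(wavelet_packet_features(epoch).shape)        # (16,) at level 4
```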

Eclectic Rule-Extraction from Support Vector Machines

Support vector machines (SVMs) have shown superior performance compared to other machine learning techniques, especially in classification problems. Yet one limitation of SVMs is the lack of an explanation capability, which is crucial in some applications, e.g. in the medical and security domains. In this paper, a novel approach for eclectic rule-extraction from support vector machines is presented. This approach utilizes the knowledge acquired by the SVM and represented in its support vectors, as well as the parameters associated with them. The approach comprises three stages: training, propositional rule-extraction and rule quality evaluation. Results from four different experiments have demonstrated the value of the approach for extracting comprehensible rules of high accuracy and fidelity.
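
As a heavily hedged illustration of the general idea (not the paper's algorithm), the Python sketch below trains an SVM, forms simple propositional interval rules from the support vectors of each class, and scores the rules' fidelity against the SVM's own predictions; the dataset, rule form and thresholds are all illustrative.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X, y = X[y < 2], y[y < 2]                     # binary problem for simplicity
svm = SVC(kernel="rbf", gamma="scale").fit(X, y)

rules = {}
for cls in np.unique(y):
    sv = svm.support_vectors_[y[svm.support_] == cls]
    # Propositional rule: every feature lies within the support-vector range.
    rules[cls] = list(zip(sv.min(axis=0), sv.max(axis=0)))

def rule_predict(x):
    for cls, bounds in rules.items():
        if all(lo <= v <= hi for v, (lo, hi) in zip(x, bounds)):
            return cls
    return None                                # not covered by any rule

covered = [i for i in range(len(X)) if rule_predict(X[i]) is not None]
fidelity = np.mean([rule_predict(X[i]) == svm.predict(X[i:i + 1])[0]
                    for i in covered])
print(f"rules cover {len(covered)}/{len(X)} samples, fidelity = {fidelity:.2f}")
```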

Object Tracking using MACH filter and Optical Flow in Cluttered Scenes and Variable Lighting Conditions

The vision-based tracking problem is solved through a combination of optical flow, a MACH filter and log r-θ mapping. Optical flow is used to detect regions of movement in video frames acquired under variable lighting conditions. The region of movement is segmented and then searched for the target. A template is used for target recognition on the segmented regions to detect the region of interest. The template is trained offline on a sequence of target images created using the MACH filter and log r-θ mapping. The template is applied to areas of movement in successive frames, and strong correlation is seen for in-class targets. Correlation peaks above a certain threshold indicate the presence of the target, and the target is tracked over successive frames.
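
The two-stage idea can be sketched with OpenCV as below (Python): dense optical flow flags regions of movement, and correlation with a template is evaluated only inside those regions. Plain normalized cross-correlation stands in for the offline-trained MACH filter and log r-θ mapping, which are not reproduced, and the thresholds are assumed values.

```python
import cv2
import numpy as np

def track(prev_gray, gray, template, flow_thresh=1.0, corr_thresh=0.6):
    # Dense optical flow between consecutive frames marks moving regions.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    moving = (magnitude > flow_thresh).astype(np.uint8) * 255

    # Restrict the correlation search to the moving areas only.
    masked = cv2.bitwise_and(gray, gray, mask=moving)
    result = cv2.matchTemplate(masked, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc if max_val > corr_thresh else None   # top-left of target
```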

Semi-automatic Background Detection in Microscopic Images

Recent years have seen increasing use of image analysis techniques in the field of biomedical imaging, in particular in microscopic imaging. The basic step of most image analysis techniques relies on a background image free of objects of interest, whether cells or histological samples, in order to perform further analysis such as segmentation or mosaicing. Commonly, this image consists of an empty field acquired in advance. However, acquiring an empty field is often not feasible. Moreover, it may differ from the background region of the sample actually being studied because of interaction with the organic matter. Finally, it can be expensive, for instance in the case of live-cell analyses. We propose a non-parametric, general-purpose approach in which the background is built automatically from a sequence of images that may contain objects of interest. The amount of object-free area in each image only affects the overall speed of obtaining the background. Experiments with different kinds of microscopic images prove the effectiveness of our approach.
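
A minimal version of the idea, sketched below in Python with NumPy, builds the background from a per-pixel temporal median of the sequence: as long as each pixel is object-free in enough frames, the median converges to the background. The median here is a simple stand-in for the paper's non-parametric estimator.

```python
import numpy as np

def estimate_background(frames):
    """frames: iterable of equally-sized grayscale images (2-D arrays)."""
    stack = np.stack([np.asarray(f, dtype=np.float32) for f in frames])
    return np.median(stack, axis=0)

# The more object-free area each frame contributes, the fewer frames are
# needed before the median settles on the true background.
```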

Moving Area Filter to Detect Objects in Video Sequences from a Moving Platform

Detecting objects in video sequences is a challenging task in identifying and tracking moving objects, and background removal is considered a basic step in moving-object detection. Dual static cameras placed at the front and rear of a moving platform gather the information used to detect objects. The background changes with the speed and direction of the moving platform, so distinguishing moving objects becomes complicated. In this paper, we propose a framework that allows moving objects to be detected dynamically over a variety of speeds and directions. The object detection technique is built on two levels: the first level applies background removal and edge detection to generate moving areas; the second level applies a Moving Area Filter (MAF) and then calculates a Correlation Score (CS) for each adjusted moving area. Moving areas with similar CS values are merged and marked as a moving object. Experimental results are reported on real scenes acquired by the dual static cameras without overlap between their views. The results show the accuracy of the approach in detecting objects compared with optical flow and Mixture Model Gaussian (MMG) methods, and an accuracy ratio is produced to measure how accurately moving objects are detected.
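
A rough sketch of the two-level pipeline is given below in Python with OpenCV: frame differencing and edge detection stand in for the background-removal step and produce candidate moving areas, and a normalized correlation score between candidate areas drives a simple merge. The thresholds and the merging rule are illustrative assumptions rather than the MAF/CS definitions of the paper.

```python
import cv2
import numpy as np

def detect_moving_objects(prev_gray, gray, diff_thresh=25, min_area=200,
                          cs_thresh=0.6):
    # Level 1: background removal (frame differencing) plus edge detection
    # to generate candidate moving areas.
    diff = cv2.absdiff(gray, prev_gray)
    _, fg = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.bitwise_and(fg, cv2.Canny(gray, 50, 150))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    areas = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) >= min_area]

    # Level 2: a correlation score between candidate areas; areas whose
    # appearance correlates strongly are merged into one moving object.
    def cs(a, b):
        pa = gray[a[1]:a[1] + a[3], a[0]:a[0] + a[2]].astype(np.float32)
        pb = gray[b[1]:b[1] + b[3], b[0]:b[0] + b[2]]
        pb = cv2.resize(pb, (a[2], a[3])).astype(np.float32)
        return float(cv2.matchTemplate(pa, pb, cv2.TM_CCOEFF_NORMED)[0, 0])

    objects = []
    for a in areas:
        for i, b in enumerate(objects):
            if cs(a, b) > cs_thresh:
                x, y = min(a[0], b[0]), min(a[1], b[1])
                objects[i] = (x, y,
                              max(a[0] + a[2], b[0] + b[2]) - x,
                              max(a[1] + a[3], b[1] + b[3]) - y)
                break
        else:
            objects.append(a)
    return objects        # bounding boxes of the detected moving objects
```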

The Use of Project Work to Enhance Writing Skill

This paper explores the use of project work in content-based instruction at a Rajabhat University, a teacher college where student teachers are trained to perform teaching roles mainly at the basic education level. Its aim is to link theory to practice and to help language teachers maximize the full potential of project work for genuine communication and give real meaning to writing activity. Two research questions guide this study: a) What is the students' writing achievement, measured against the 70% attainment target, after the use of project work to enhance the skill? and b) To what degree do the students' writing skills develop during the project? The sample comprised 38 fourth-year English major students. Data were collected by means of an achievement test, student writing work, and project diaries. The scores on the summative achievement test were analyzed by mean score, standard deviation, and t-test. The project diary serves as the students' record of the language acquired during the project. The list of structures and vocabulary noted in the diaries shows the students' ability to attend to, recognize, and focus on meaningful patterns of language forms.

Improving Worm Detection with Artificial Neural Networks through Feature Selection and Temporal Analysis Techniques

Computer worm detection is commonly performed by antivirus software tools that rely on prior explicit knowledge of the worm's code (detection based on code signatures). We present an approach for detecting the presence of computer worms based on Artificial Neural Networks (ANN) using a computer's behavioral measures. The identification of significant features that describe the activity of a worm within a host is commonly obtained from security experts. We suggest acquiring these features by applying feature selection methods. We compare three different feature selection techniques for dimensionality reduction and identification of the most prominent features, to capture efficiently the computer behavior in the context of worm activity. Additionally, we explore three different temporal representation techniques for the most prominent features. In order to evaluate the different techniques, several computers were infected with five different worms, and 323 different features of the infected computers were measured. We evaluated each technique by preprocessing the dataset accordingly and training the ANN model on the preprocessed data. We then evaluated the ability of the model to detect the presence of a new computer worm, in particular during heavy user activity on the infected computers.
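
A hedged sketch of such an evaluation pipeline is given below in Python: select the top-k behavioral features, train an ANN (multilayer perceptron) on computers infected with known worms, and test on measurements of an unseen worm. The choice of selector, k, and network size are illustrative, and scikit-learn stands in for whatever toolkit the authors used.

```python
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_worm_detector(X_train, y_train, k=20):
    """X_train: (samples, 323 behavioral features); y_train: worm present?"""
    model = make_pipeline(
        StandardScaler(),
        SelectKBest(f_classif, k=k),          # one of several selection options
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    )
    return model.fit(X_train, y_train)

# Testing on measurements from a worm held out of training gauges how well
# the detector generalizes to previously unseen worms.
```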

A Comparison of Experimental Data with Monte Carlo Calculations for Optimisation of the Source-to-Detector Distance in Determining the Efficiency of a LaBr3:Ce (5%) Detector

Cerium-doped lanthanum bromide LaBr3:Ce (5%) crystals are considered to be among the most advanced scintillator materials used in PET scanning, combining a high light yield, fast decay time and excellent energy resolution. Apart from the correct choice of scintillator, it is also important to optimise the detector geometry, not least in terms of the source-to-detector distance, in order to obtain reliable efficiency measurements. In this study a commercially available 25 mm x 25 mm BrilLanCe™ 380 LaBr3:Ce (5%) detector was characterised in terms of its efficiency at varying source-to-detector distances. Gamma-ray spectra of 22Na, 60Co, and 137Cs were separately acquired at distances of 5, 10, 15, and 20 cm. As a result of the change in solid angle subtended by the detector, the geometric efficiency decreased with increasing distance. High efficiencies at short distances can cause pulse pile-up when subsequent photons are detected before previously detected events have decayed. To reduce this systematic error, the source-to-detector distance should balance efficiency against pulse pile-up suppression, as pile-up corrections would otherwise be necessary at short distances. In addition to the experimental measurements, Monte Carlo simulations were carried out for the same setup, allowing a comparison of results. The advantages and disadvantages of each approach are highlighted.
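
The fall-off of geometric efficiency with distance can be illustrated with the standard point-source, on-axis approximation, computed below in Python for a 25 mm diameter detector face at the distances used in the measurements. This idealized estimate only illustrates the trend; it is not a substitute for the Monte Carlo model used in the study.

```python
# Geometric efficiency = Omega / (4*pi), with the solid angle of a disk seen
# from an on-axis point source: Omega = 2*pi*(1 - d / sqrt(d^2 + r^2)).
import math

r = 0.0125                                   # detector radius, m (25 mm dia.)
for d_cm in (5, 10, 15, 20):
    d = d_cm / 100.0
    omega = 2 * math.pi * (1 - d / math.sqrt(d * d + r * r))
    print(f"{d_cm:2d} cm: geometric efficiency ~ {omega / (4 * math.pi):.2e}")
```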