Development of an Infrared Thermography Method with CO2 Laser Excitation, Applied to Defect Detection in CFRP

This paper presents a non-destructive testing (NDT) method based on infrared thermography with CO2 laser excitation at a wavelength of 10.6 μm. The laser provides a controllable heating beam, as confirmed by a preliminary test on a 1.2 m x 0.9 m x 1 cm wooden plate. As a first application, the method is used to detect defects in CFRP samples heated by the 300 W laser for 40 s. Two 40 cm x 40 cm x 4.5 cm samples are prepared, one with a defect and one without. The laser beam passes through the lens of a deviation device and heats the samples over a predetermined position and area. As a result, the absence of adhesive can be detected. The method shows clear promise as an NDT technique for composite materials. This work also provides a good basis for characterizing the laser beam, which will be useful for future detection campaigns.

Digital Redesign of Interval Systems via Particle Swarm Optimization

In this paper, a PSO-based approach is proposed to derive a digital controller for redesigned digital systems with an interval plant, based on resemblance of the extremal gain/phase margins. By combining the interval plant and a controller into an interval system, the extremal gain margins (GM) and phase margins (PM) associated with the loop transfer function can be obtained. The design problem is then formulated as the optimization of an aggregated error function that captures the deviation of the extremal GM/PM of the redesigned digital system from those of its continuous counterpart, and this function is minimized by the proposed PSO to obtain an optimal set of parameters for the digital controller. Computer simulations show that the frequency responses of the redesigned digital system with an interval plant bear a closer resemblance to its continuous-time counterpart when the PSO-derived digital controller is incorporated than when existing open-loop discretization methods are used.
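
For orientation, the following is a minimal sketch of a global-best PSO of the kind referred to above, minimizing a stand-in cost function; the paper's actual aggregated GM/PM error function and controller parameterization are not reproduced here, so demo_cost is only a placeholder.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best particle swarm optimizer."""
    lo, hi = bounds
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

# Stand-in cost: in the paper's setting this would be the aggregated error between the
# extremal GM/PM of the redesigned digital system and those of its continuous counterpart.
demo_cost = lambda p: np.sum((p - 0.5) ** 2)
params, err = pso_minimize(demo_cost, dim=3)
```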

Automated Segmentation of ECG Signals using Piecewise Derivative Dynamic Time Warping

Electrocardiogram (ECG) segmentation is necessary to help reduce the time-consuming task of manually annotating ECGs. Several algorithms have been developed to segment the ECG automatically. We first review several such methods, and then present a new single-lead segmentation method based on adaptive piecewise constant approximation (APCA) and piecewise derivative dynamic time warping (PDDTW). The results are evaluated on the QT database. We compared our results to Laguna's two-lead method. Our proposed approach has a comparable mean error, but yields a slightly higher standard deviation than Laguna's method.
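
The sketch below illustrates the derivative DTW building block only (a derivative estimate followed by a standard DTW alignment); it is not the authors' full APCA+PDDTW pipeline, and the template and beat signals are synthetic placeholders.

```python
import numpy as np

def derivative(x):
    """Keogh-Pazzani style derivative estimate used by derivative DTW."""
    d = np.empty_like(x, dtype=float)
    d[1:-1] = ((x[1:-1] - x[:-2]) + (x[2:] - x[:-2]) / 2.0) / 2.0
    d[0], d[-1] = d[1], d[-2]
    return d

def dtw_path(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping; returns the warping path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    i, j, path = n, m, []                        # backtrack from the end of both series
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else ((i - 1, j) if step == 1 else (i, j - 1))
    return path[::-1]

# Derivative DTW: align the derivative sequences, then transfer annotated fiducial
# points from a labelled template beat to an unlabelled beat along the path.
template = np.sin(np.linspace(0, 2 * np.pi, 120))
beat = np.sin(np.linspace(0, 2 * np.pi, 150)) + 0.01 * np.random.randn(150)
path = dtw_path(derivative(template), derivative(beat))
```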

Fractal Analysis of 16S rRNA Gene Sequences in Archaea Thermophiles

A nucleotide sequence can be expressed as a numerical sequence when each nucleotide is assigned its proton number. The resulting gene numerical sequence can be investigated for its fractal dimension in terms of evolution and chemical properties for comparative studies. We have investigated such nucleotide fluctuation in the 16S rRNA gene of archaeal thermophiles. The studied archaeal thermophiles were Archaeoglobus fulgidus, Methanothermobacter thermautotrophicus, Methanocaldococcus jannaschii, Pyrococcus horikoshii, and Thermoplasma acidophilum. These five archaea-euryarchaeota thermophiles have fractal dimension values ranging from 1.93 to 1.97. Computer simulation shows that random sequences have an average fractal dimension of about 2 with a standard deviation of about 0.015. The fractal dimension was found to correlate negatively with the thermophiles' optimal growth temperature, with an R2 value of 0.90 (N = 5). The inclusion of two archaea-crenarchaeota thermophiles reduces the R2 value to 0.66 (N = 7). Further inclusion of two bacterial thermophiles reduces the R2 value to 0.50 (N = 9). The fractal dimension correlates positively with the sequence GC content, with an R2 value of 0.89 for the five archaea-euryarchaeota thermophiles (and 0.74 for the entire set of N = 9), although computer simulation shows little correlation. The highest (positive) correlation was found between the fractal dimension and the di-nucleotide Shannon entropy. However, Shannon entropy and sequence GC content were observed to correlate with optimal growth temperature, with R2 values of 0.8 (negative) and 0.88 (positive), respectively, for the entire set of 9 thermophiles; thus the correlation lacks species specificity. Together with another correlation study of bacterial radiation dosage with RecA repair gene sequence fractal dimension, it is postulated that fractal dimension analysis is a sensitive tool for studying the relationship between genotype and phenotype among closely related sequences.
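
As an illustration of the kind of computation involved, the sketch below maps a nucleotide string to the proton numbers of the free bases and estimates a fractal dimension with a Higuchi-type estimator; the paper's exact mapping and estimator are not specified here, so both should be treated as assumptions.

```python
import numpy as np

# Illustrative proton-number mapping (proton counts of the free bases adenine, guanine,
# cytosine and thymine/uracil); substitute the paper's exact assignment if it differs.
PROTON = {'A': 70, 'G': 78, 'C': 58, 'T': 66, 'U': 66}

def higuchi_fd(x, k_max=10):
    """Higuchi-type estimate of the fractal dimension of a 1-D numerical sequence."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    curve_lengths = []
    for k in range(1, k_max + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # normalized curve length for this offset m and scale k
            lk.append(np.abs(np.diff(x[idx])).sum() * (n - 1) / ((len(idx) - 1) * k * k))
        curve_lengths.append(np.mean(lk))
    # L(k) ~ k^(-D): the slope of log L(k) against log(1/k) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, k_max + 1)), np.log(curve_lengths), 1)
    return slope

seq = "AGCTTTTCATTCTGACTGCAACGGGCAATATGTCTCT" * 40   # toy sequence, not a real 16S rRNA gene
fd = higuchi_fd([PROTON[b] for b in seq])
```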

Residence Time Distribution in a Two Impinging Streams Cyclone Reactor: CFD Prediction and Experimental Validation

The quantified residence time distribution (RTD) provides a numerical characterization of mixing in a reactor, thus allowing the process engineer to better understand its mixing performance. This paper discusses computational studies to investigate flow patterns in a two impinging streams cyclone reactor (TISCR). Flow in the reactor was modeled with computational fluid dynamics (CFD). Utilizing the Eulerian-Lagrangian approach implemented in FLUENT (V6.3.22), particle trajectories were obtained by solving the particle force balance equations. From simulation results obtained at different Δt values, the mean residence time (tm) and the mean square deviation (σ²) were calculated. Good agreement can be observed between predicted and experimental data. Simulation results indicate that the behavior of complex reactor systems can be predicted using the CFD technique with a minimum of data required for validation.
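
For reference, the mean residence time and the mean square deviation follow from the first two moments of the residence time distribution E(t); a minimal sketch for a pulse-tracer concentration curve is given below (the tracer data are synthetic placeholders, not the TISCR measurements).

```python
import numpy as np

def trapz(y, t):
    """Simple trapezoidal integration, kept explicit for portability."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

def rtd_moments(t, c):
    """Mean residence time t_m and variance sigma^2 from a pulse-tracer curve C(t)."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    E = c / trapz(c, t)                        # residence time distribution E(t)
    t_m = trapz(t * E, t)                      # first moment: mean residence time
    sigma2 = trapz((t - t_m) ** 2 * E, t)      # second central moment: mean square deviation
    return t_m, sigma2

# toy tracer response
t = np.linspace(0.0, 10.0, 200)
c = t * np.exp(-t)
t_m, sigma2 = rtd_moments(t, c)
```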

Analysis of Noise Level Effects on Signal-Averaged Electrocardiograms

Noise level has critical effects on the diagnostic performance of the signal-averaged electrocardiogram (SAECG), because the true starting and end points of the QRS complex can be masked by residual noise and are sensitive to the noise level. Several studies and commercial machines use a fixed number of heart beats (typically between 200 and 600 beats) or a predefined noise level (typically between 0.3 and 1.0 μV) in each of the X, Y, and Z leads to perform SAECG analysis. However, different criteria or methods used to perform SAECG cause discrepancies in the noise levels among study subjects. According to the recommendations of the 1991 ESC, AHA, and ACC Task Force Consensus Document for the use of SAECG, the determinations of onset and offset are closely related to the mean and standard deviation of the noise sample. Hence, this study performs SAECG using consistent root-mean-square (RMS) noise levels among study subjects and analyzes the noise-level effects on the SAECG. It also evaluates the differences between normal subjects and chronic renal failure (CRF) patients in the time-domain SAECG parameters. The study subjects comprised 50 normal Taiwanese subjects and 20 CRF patients. During signal-averaged processing, different RMS noise levels were applied to evaluate their effects on three time-domain parameters: (1) filtered total QRS duration (fQRSD), (2) RMS voltage of the last 40 ms of the QRS (RMS40), and (3) duration of the low-amplitude signals below 40 μV (LAS40). The results demonstrate that reducing the RMS noise level increases fQRSD and LAS40, decreases RMS40, and further increases the differences in fQRSD and RMS40 between normal subjects and CRF patients. The SAECG may also become abnormal as the RMS noise level is reduced. In conclusion, it is essential to establish diagnostic criteria for SAECG using consistent RMS noise levels in order to reduce noise-level effects.
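
The three parameters are defined on the filtered QRS complex; the sketch below computes them from assumed onset/offset sample indices following the standard definitions (filtered QRS duration, RMS of the terminal 40 ms, duration of terminal signals below 40 μV). The function names and the demo signal are illustrative, not the study's data or processing chain.

```python
import numpy as np

def saecg_parameters(vm, onset, offset, fs=1000.0):
    """Time-domain SAECG parameters from the filtered QRS vector magnitude vm (in uV).
    onset/offset are QRS boundary sample indices, fs is the sampling rate in Hz."""
    fqrsd = (offset - onset) / fs * 1000.0                 # filtered QRS duration, ms
    n40 = int(round(0.040 * fs))                           # samples spanning the last 40 ms
    rms40 = float(np.sqrt(np.mean(vm[offset - n40:offset] ** 2)))
    terminal_low = 0                                       # terminal run of samples < 40 uV
    for value in vm[onset:offset][::-1]:
        if value >= 40.0:
            break
        terminal_low += 1
    las40 = terminal_low / fs * 1000.0                     # duration in ms
    return fqrsd, rms40, las40

# toy demo: a synthetic 120 ms filtered QRS envelope sampled at 1 kHz
vm = np.concatenate([np.linspace(0.0, 300.0, 60), np.linspace(300.0, 5.0, 60)])
print(saecg_parameters(vm, onset=0, offset=120))
```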

Evaluating the Response of Rainfed-Chickpea to Population Density in Iran, Using Simulation

The response of growth and yield of rainfed chickpea to population density should be evaluated based on long-term experiments in order to capture climate variability, which is achievable only by simulation. In this simulation study, the evaluation was done by running the CYRUS model on long-term daily weather data for five locations in Iran. The tested population densities were 7 to 59 stands per square meter (in steps of 2). Various functions, including quadratic, segmented, beta, broken-linear, and dent-like functions, were tested. Considering the root mean square of deviations and linear regression statistics [intercept (a), slope (b), and correlation coefficient (r)] for predicted versus observed variables, the quadratic and broken-linear functions appeared to be appropriate for describing the changes in biomass and grain yield, and in harvest index, respectively. Results indicate that in all locations grain yield tends to increase as the population becomes denser, but subsequently decreases. This was also true for biomass in the five locations. The harvest index appeared to plateau across low population densities, but decreased as density increased further. The turning point (optimum population density) for grain yield was 30.68 stands per square meter in Isfahan, 30.54 in Shiraz, 31.47 in Kermanshah, 34.85 in Tabriz, and 32.00 in Mashhad. The optimum population density for biomass ranged from 24.6 (in Tabriz) to 35.3 stands per square meter (in Mashhad). For harvest index it varied between 35.87 and 40.12 stands per square meter.
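
As a minimal illustration of how such a turning point is obtained from a quadratic yield-density response, the sketch below fits a parabola and takes its vertex; the data are placeholders, not CYRUS model output.

```python
import numpy as np

# Placeholder yield-density data: fit a quadratic response and take its vertex as the
# turning point, i.e. the optimum population density.
rng = np.random.default_rng(1)
density = np.arange(7, 60, 2, dtype=float)                   # stands per square metre
grain_yield = -0.9 * (density - 32.0) ** 2 + 1800.0 + rng.normal(0.0, 30.0, density.size)

c2, c1, c0 = np.polyfit(density, grain_yield, 2)             # y = c2*d^2 + c1*d + c0
optimum_density = -c1 / (2.0 * c2)                           # vertex of the fitted parabola
```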

New Multi-Solid Thermodynamic Model for the Prediction of Wax Formation

In previous multi-solid models, the φ approach is used to calculate fugacity in the liquid phase. In the proposed multi-solid thermodynamic model, the γ approach is used, for the first time, to calculate fugacity in the liquid mixture. Several activity coefficient models were therefore studied, and the results show that the predictive Wilson model is the most appropriate. The results demonstrate that the γ approach with the predictive Wilson model agrees better with experimental data than the previous multi-solid models. This method also yields a new approach to stability analysis in phase equilibrium calculations. Meanwhile, the run time of the γ approach is shorter than that of the previous methods based on the φ approach. The new model gives an average absolute deviation (AAD) of 0.75% from the experimental data, which is clearly lower than the error of the previous multi-solid models.
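
For orientation, the sketch below evaluates the generic multicomponent Wilson activity coefficient model, from which the liquid-phase fugacity follows in a γ approach; the Λ parameters shown are arbitrary placeholders, whereas in the paper's model they would be supplied by the predictive Wilson parameterization.

```python
import numpy as np

def wilson_gamma(x, Lam):
    """Activity coefficients from the multicomponent Wilson model.
    x: mole fractions (n,); Lam: Wilson parameters Lambda[i, j] (n, n) with Lambda[i, i] = 1.
    ln(gamma_i) = 1 - ln(sum_j x_j Lam_ij) - sum_k [x_k Lam_ki / sum_j x_j Lam_kj]"""
    x, Lam = np.asarray(x, float), np.asarray(Lam, float)
    S = Lam @ x                                   # S_i = sum_j x_j * Lambda_ij
    ln_gamma = 1.0 - np.log(S) - (Lam.T @ (x / S))
    return np.exp(ln_gamma)

# toy binary mixture with placeholder Wilson parameters
gamma = wilson_gamma([0.3, 0.7], [[1.0, 0.45], [0.85, 1.0]])
```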

Experimental Analysis on Electrical and Photometric Performances of Commercially Available Integrated Compact Fluorescent Lamp

Lighting upgrades involve relatively low costs, which allows the benefits to be spread more widely than is possible with any other energy efficiency measure. In order to popularize the adoption of CFLs in Taiwan, the authority proposes to implement a new comparative labeling system for energy-efficient lamps. The current study was accordingly undertaken to investigate the factors affecting the performance, and the deviation between actual and labeled performance, of commercially available integrated CFLs. In this paper, standard test methods to determine the electrical and photometric performance of CFLs were developed based on CIE 84-1989 and CIE 60901-1987, and 55 CFLs selected from the market were tested. The results show that CFLs with higher color temperatures achieve lower efficacy. It was noticed that most CFL packaging lacks information on the Color Rendering Index. Also, no correlation between price and performance of the CFLs was found in this work. The results of this paper may help consumers make more informed CFL-purchasing decisions.

Experimental Study on Plasma Parameter Measurement in the Backflow Region of an Ion Thruster

The charge-exchange xenon (CEX) ions generated by an ion thruster can flow back to the surface of the spacecraft and threaten the safety of spacecraft operation. In order to evaluate the effects of the induced plasma environment in the backflow region on the spacecraft, we designed a spherical single Langmuir probe, 5.8 cm in diameter, for measuring low-density plasma parameters in the backflow region of an ion thruster. In practice, the tests are performed over a two-dimensional array (40 cm × 60 cm) composed of 20 sites. The experimental results show that the electron temperature ranges from 3.71 eV to 3.96 eV, with a mean value of 3.82 eV and a standard deviation of 0.064 eV. The electron density ranges from 8.30×10¹²/m³ to 1.66×10¹³/m³, with a mean value of 1.30×10¹³/m³ and a standard deviation of 2.15×10¹²/m³. All data are analyzed assuming "ideal" plasma conditions with Maxwellian distributions.
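
For context, electron temperature and density are conventionally extracted from a Langmuir probe I-V trace as sketched below, assuming a Maxwellian electron population; the bias range, currents, and saturation value in the demo are synthetic and merely chosen to be of the same order as the values quoted above, not the actual measurements.

```python
import numpy as np

E_CHARGE, M_ELECTRON = 1.602e-19, 9.109e-31       # C, kg

def langmuir_te_ne(v, i_electron, i_sat, probe_area):
    """Electron temperature (eV) and density (1/m^3) from a single Langmuir probe trace,
    assuming a Maxwellian electron population. v: probe bias relative to plasma potential (V)
    in the exponential transition region; i_electron: electron current there (A);
    i_sat: electron saturation current (A); probe_area: probe surface area (m^2)."""
    # Te in eV is the inverse slope of ln(I_e) versus V in the transition region.
    slope, _ = np.polyfit(v, np.log(i_electron), 1)
    te_ev = 1.0 / slope
    # I_sat = e * n_e * A * sqrt(k*Te / (2*pi*m_e))  =>  solve for n_e.
    n_e = i_sat / (E_CHARGE * probe_area *
                   np.sqrt(te_ev * E_CHARGE / (2.0 * np.pi * M_ELECTRON)))
    return te_ev, n_e

area = 4.0 * np.pi * (0.058 / 2.0) ** 2           # 5.8 cm diameter spherical probe, m^2
v = np.linspace(-15.0, 0.0, 30)
i_e = 5.5e-3 * np.exp(v / 3.8)                     # synthetic transition-region current
te, ne = langmuir_te_ne(v, i_e, i_sat=5.5e-3, probe_area=area)   # ~3.8 eV, ~1e13 m^-3
```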

A Method for Identifying Physical Parameters with Linear Fractional Transformation

This paper proposes a new parameter identification method based on the Linear Fractional Transformation (LFT). It is assumed that the target linear system includes unknown parameters. The parameter deviations are separated from a nominal system via LFT and identified by organizing the I/O signals around the separated deviations of the real system. The purpose of this paper is to apply LFT to simultaneously identify the parameter deviations in systems with fewer outputs than unknown parameters. As a fundamental example, the method is applied to a one-degree-of-freedom vibratory system, for which all physical parameters were identified simultaneously via LFT. Numerical simulations were then conducted for this system to verify the results. This study shows that all the physical parameters of a system with fewer outputs than unknown parameters can be effectively identified simultaneously using LFT.
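
A minimal sketch of the LFT machinery is given below: the standard upper LFT closes a nominal interconnection M around a block Δ that collects the separated parameter deviations. The matrices in the usage example are arbitrary placeholders, not the paper's one-degree-of-freedom model.

```python
import numpy as np

def upper_lft(M11, M12, M21, M22, Delta):
    """Standard upper linear fractional transformation
    F_u(M, Delta) = M22 + M21 @ Delta @ inv(I - M11 @ Delta) @ M12,
    i.e. the nominal interconnection M closed around the deviation block Delta."""
    I = np.eye(M11.shape[0])
    return M22 + M21 @ Delta @ np.linalg.solve(I - M11 @ Delta, M12)

# toy usage: a 2x2 nominal interconnection closed around a diagonal block of
# (placeholder) fractional parameter deviations
M11, M12, M21, M22 = np.zeros((2, 2)), np.eye(2), np.eye(2), np.eye(2)
Delta = np.diag([0.10, -0.05])
perturbed = upper_lft(M11, M12, M21, M22, Delta)
```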

Enhanced Efficacy of Kinetic Power Transform for High-Speed Wind Field

The three-time-scale plant model of a wind power generator, including a wind turbine, a flexible vertical shaft, a Variable Inertia Flywheel (VIF) module, an Active Magnetic Bearing (AMB) unit, and the applied wind sequence, is constructed. In order for the wind power generator to remain operational when the spindle speed exceeds its rated value, the VIF is included so that the spindle speed can be appropriately slowed down whenever a stronger wind field is encountered. To prevent potential damage from the shaft colliding with conventional bearings, the AMB unit is proposed to regulate the shaft position deviation. By a singular perturbation order-reduction technique, a lower-order plant model can be established for the synthesis of the feedback controller. Two major system parameter uncertainties, an additive uncertainty and a multiplicative uncertainty, arise from the wind turbine and the VIF, respectively. A Frequency Shaping Sliding Mode Control (FSSMC) loop is proposed to account for these uncertainties and suppress the unmodeled higher-order plant dynamics. Finally, the efficacy of the FSSMC is verified by intensive computer simulations and experiments, both for regulating the shaft position deviation and for counterbalancing unpredictable wind disturbances.

The Same or Not the Same - On the Variety of Mechanisms of Path Dependence

In association with path dependence, researchers often speak of institutional "lock-in", thereby indicating that far-reaching path deviation or path departure is to be regarded as an exceptional case. This article submits the alleged general inclination toward stability of path-dependent processes to a critical review. The different reasons for path dependence found in the literature indicate that different continuity-ensuring mechanisms are at work when people speak of path dependence ("increasing returns", complementarity, sequences, etc.). As these mechanisms are susceptible to fundamental change in different ways and to different degrees, the path dependence concept alone is of only limited explanatory value. It is therefore indispensable to identify the underlying continuity-ensuring mechanism as well if a statement's empirical value is to go beyond the trivial, always true "history matters".

Power System Damping Using Hierarchical Fuzzy Multi-Input Power System Stabilizer and Static VAR Compensator

This paper proposes the application of a hierarchical fuzzy system (HFS) to a multi-input power system stabilizer (MPSS) and a Static VAR Compensator (SVC) in a multi-machine environment. In a conventional fuzzy logic system, the number of rules grows exponentially with the number of variables. The proposed HFS method is developed to solve this problem. To reduce the number of rules, the HFS consists of a number of low-dimensional fuzzy systems arranged in a hierarchical structure. In fact, by using an HFS the total number of rules increases only linearly with the number of input variables. In the MPSS, to improve efficiency, an auxiliary reactive power deviation signal (ΔQ) is added to the ΔP + Δω input-type power system stabilizer (PSS). A phasor model of the SVC is described and used in this paper. The performances of the MPSS, the conventional power system stabilizer (CPSS), the hierarchical fuzzy multi-input power system stabilizer (HFMPSS), and the proposed method in damping the inter-area mode of oscillation are examined in response to disturbances. The comparative study is illustrated by digital simulations, and it can be seen that the proposed PSS performs satisfactorily over the whole range of disturbances.
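
The rule-count argument can be made concrete with a small sketch, assuming the common HFS construction from cascaded two-input fuzzy sub-systems; the input and membership-function counts below are illustrative, not the stabilizer's actual configuration.

```python
# Rule-count comparison for a conventional fuzzy system versus a hierarchical fuzzy
# system built from two-input sub-systems (a common HFS construction).
def conventional_rules(n_inputs, n_sets):
    return n_sets ** n_inputs                 # grows exponentially with the number of inputs

def hierarchical_rules(n_inputs, n_sets):
    return (n_inputs - 1) * n_sets ** 2       # grows linearly with the number of inputs

# e.g. 4 stabilizer inputs, each with 7 membership functions
print(conventional_rules(4, 7), hierarchical_rules(4, 7))     # 2401 vs 147
```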

An Anomaly Detection Approach to Detect Unexpected Faults in Recordings from Test Drives

In the automotive industry, test drives are conducted during the development of new vehicle models and as part of the quality assurance of series-production vehicles. The communication on the in-vehicle network, data from external sensors, and internal data from the electronic control units are recorded by automotive data loggers during the test drives. The recordings are used for fault analysis. Since the resulting data volume is tremendous, manually analysing each recording in great detail is not feasible. This paper proposes to use machine learning to support domain experts by sparing them from sifting through irrelevant data and instead pointing them to the relevant parts of the recordings. The underlying idea is to learn the normal behaviour from available recordings, i.e. a training set, and then to autonomously detect unexpected deviations and report them as anomalies. The one-class support vector machine "support vector data description" (SVDD) is utilised to calculate distances of feature vectors. SVDDSUBSEQ is proposed as a novel approach that allows subsequences in multivariate time series data to be classified. The approach detects unexpected faults without modelling effort, as shown by experimental results on recordings from test drives.
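
A generic sketch of the underlying idea, not the authors' SVDDSUBSEQ itself, is shown below: recordings assumed to contain only normal behaviour are cut into fixed-length subsequences, a one-class SVM (scikit-learn's OneClassSVM with an RBF kernel, closely related to SVDD) is trained on them, and subsequences of a new recording with negative decision values are flagged as anomalies. Window sizes and the random data are placeholders.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def windows(signal, width, step):
    """Slice a multivariate time series (n_samples, n_channels) into flattened subsequences."""
    return np.array([signal[i:i + width].ravel()
                     for i in range(0, len(signal) - width + 1, step)])

# Train on recordings assumed to contain only normal behaviour ...
normal = np.random.randn(5000, 3)                  # placeholder multivariate recording
model = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale").fit(windows(normal, 50, 25))

# ... then flag subsequences of a new recording whose decision value is negative.
test = np.random.randn(2000, 3)
scores = model.decision_function(windows(test, 50, 25))
anomalous_windows = np.where(scores < 0)[0]
```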

Solubility of Organics in Water and Silicon Oil: A Comparative Study

The aim of this study was to compare the solubility of selected volatile organic compounds (VOCs) in water and silicon oil using the simple static headspace method. The experimental design allowed equilibrium to be reached within 30-60 minutes. Infinite-dilution activity coefficients and Henry's law constants for various organics representing esters, ketones, alkanes, aromatics, cycloalkanes, and amines were measured at 303 K. The measurements were reproducible, with a relative standard deviation and coefficient of variation of 1.3×10⁻³ and 1.3, respectively. The activity coefficients determined statically using shaker flasks were reasonably comparable to those obtained using the gas-liquid chromatographic technique and those predicted using group contribution methods, mainly UNIFAC. Silicon oil, chemically known as polydimethylsiloxane, was found to be a better absorbent for VOCs than water, which quickly becomes saturated. For example, the infinite-dilution mole-fraction-based activity coefficient of hexane is 0.503 in silicon oil and 277,000 in water, so silicon oil is superior by a factor of 550,696. Henry's law constants and activity coefficients at infinite dilution play a significant role in the design of scrubbers for the abatement of volatile organic compounds from contaminated air streams. This paper presents the phase equilibrium of volatile organic compounds in very dilute aqueous and polymeric solutions, indicating the movement and fate of the chemicals in air and solvent. The good agreement between the results obtained here and those obtained by the same authors using other methods, as well as values in the literature, indicates that the present results are reliable.
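
For context, the mole-fraction-based Henry's law constant follows directly from the infinite-dilution activity coefficient and the solute vapour pressure (assuming an ideal gas phase); the hexane vapour pressure used below is an approximate literature value near 303 K, included only for illustration.

```python
# Mole-fraction based Henry's law constant from the infinite-dilution activity
# coefficient and the solute vapour pressure: H_i = gamma_inf * P_sat (ideal gas phase).
def henry_constant(gamma_inf, p_sat_kpa):
    return gamma_inf * p_sat_kpa             # kPa per unit mole fraction

# hexane near 303 K (vapour pressure ~25 kPa, approximate, for illustration only)
h_water = henry_constant(277_000.0, 25.0)     # in water: very large -> poor absorbent
h_silicon_oil = henry_constant(0.503, 25.0)   # in silicon oil: small -> good absorbent
```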

Awareness of Reading Strategies among EFL Learners at Bangkok University

This questionnaire-based study aimed to measure and compare the awareness of English reading strategies among EFL learners at Bangkok University (BU), classified by gender, field of study, and English learning experience. Proportional stratified random sampling was employed to formulate a sample of 380 BU students. The data were statistically analyzed in terms of the mean and standard deviation. t-Test analysis was used to find differences in awareness of reading strategies between two groups (male vs. female, and science vs. social-science students). In addition, one-way analysis of variance (ANOVA) was used to compare reading strategy awareness among BU students with different lengths of English learning experience. The results indicated that the overall awareness of reading strategies of EFL learners at BU was at a high level (mean = 3.60) and that there was no statistically significant difference between males and females, or among students with different lengths of English learning experience, at the 0.05 significance level. However, significant differences among students from different fields of study were found at the same level of significance.

A Hybrid Approach for Quantification of Novelty in Rule Discovery

Rule discovery is an important technique for mining knowledge from large databases. The use of objective measures for discovering interesting rules leads to another data mining problem, although one of reduced complexity. Data mining researchers have studied subjective measures of interestingness to reduce the volume of discovered rules and ultimately improve the overall efficiency of the KDD process. In this paper we study the novelty of discovered rules as a subjective measure of interestingness. We propose a hybrid approach that uses objective and subjective measures to quantify the novelty of discovered rules in terms of their deviations from known rules. We analyze the types of deviation that can arise between two rules and categorize the discovered rules according to a user-specified threshold. We implement the proposed framework and experiment with several public datasets. The experimental results are quite promising.

On-Line Geometrical Identification of Reconfigurable Machine Tool using Virtual Machining

One of the main research directions in the CAD/CAM machining area is the reduction of machining time. Feedrate scheduling is one of the advanced techniques that keeps the uncut chip area constant and, consequently, keeps the main cutting force constant. There are two main ways to optimize the feedrate. The first consists of cutting force monitoring, which requires complex equipment for force measurement and then sets the feedrate according to the cutting force variation. The second is to optimize the feedrate by keeping the material removal rate constant for the given cutting conditions. This paper proposes a new approach using an extended database that replaces the system model. The feedrate schedule is determined based on the identification of the reconfigurable machine tool and on feed values derived from the uncut chip section area, the contact length between tool and blank, and the geometrical roughness. The first stage consists of monitoring the blank and the tool to determine their actual profiles. The next stage is the determination of the programmed tool path that yields the target profile of the piece. The graphic representation environment models the tool and blank regions, and the tool model is then positioned relative to the blank model according to the programmed tool path. For each of these positions the geometrical roughness value, the uncut chip area, and the contact length between tool and blank are calculated. Each of these parameters is compared with its admissible value and, according to the result, the feed value is established. This approach has the following advantages: cutting force prediction is possible for complex cutting processes; the real cutting profile, which deviates from the theoretical profile, is taken into account; the blank-tool contact length can be limited; and the programmed tool path can be corrected so that the target profile is obtained. Applying this method yields data sets that allow feedrate scheduling such that the uncut chip area is constant and, as a result, the cutting force is constant, which allows the machine tool to be used more efficiently and reduces machining time.
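
The per-position feed adjustment described above can be sketched as a simple constraint loop; the geometry evaluation is assumed to come from the virtual-machining model and is replaced here by a toy placeholder, and all names and limits are illustrative rather than the paper's actual implementation.

```python
def schedule_feed(positions, f_max, a_adm, l_adm, rz_adm, geometry):
    """Reduce the feed at each tool position until the computed uncut chip area (a),
    contact length (l) and geometric roughness (rz) all satisfy their admissible limits."""
    feeds = []
    for pos in positions:
        f = f_max
        a, l, rz = geometry(pos, f)           # uncut chip area, contact length, roughness
        while (a > a_adm or l > l_adm or rz > rz_adm) and f > 0.05 * f_max:
            f *= 0.9                          # back the feed off in small steps
            a, l, rz = geometry(pos, f)
        feeds.append(round(f, 3))
    return feeds

# toy geometry model standing in for the virtual-machining evaluation
toy_geometry = lambda pos, f: (0.02 * f * (1 + 0.1 * pos), 1.5, 0.05 * f ** 2)
print(schedule_feed(positions=range(5), f_max=2.0, a_adm=0.05, l_adm=2.0,
                    rz_adm=0.1, geometry=toy_geometry))
```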