Automated ECG Segmentation Using Piecewise Derivative Dynamic Time Warping

Electrocardiogram (ECG) segmentation is needed to reduce the time-consuming task of manually annotating ECGs. Several algorithms have been developed to segment the ECG automatically. We first review several such methods and then present a new single-lead segmentation method based on adaptive piecewise constant approximation (APCA) and piecewise derivative dynamic time warping (PDDTW). The results are evaluated on the QT database and compared to Laguna's two-lead method. Our proposed approach achieves a comparable mean error but a slightly higher standard deviation than Laguna's method.
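
As background, here is a minimal sketch (plain Python/NumPy, not the authors' implementation) of derivative dynamic time warping, the core alignment step behind PDDTW: an annotated template beat is warped onto a new beat, and the resulting path carries the template's P, QRS, and T boundaries across. In the paper's setting the inputs would presumably be the APCA-reduced series rather than raw samples, which keeps the quadratic DTW cost manageable.

    import numpy as np

    def derivative(x):
        """Estimated derivative used by derivative DTW (Keogh & Pazzani):
        the average of the slope to the previous point and the slope of
        the line through the two neighbours."""
        d = np.empty_like(x, dtype=float)
        d[1:-1] = ((x[1:-1] - x[:-2]) + (x[2:] - x[:-2]) / 2.0) / 2.0
        d[0], d[-1] = d[1], d[-2]
        return d

    def dtw_align(template, beat):
        """Plain O(n*m) DTW on the derivative series; returns the warping
        path, which maps template annotations onto the new beat."""
        a = derivative(np.asarray(template, dtype=float))
        b = derivative(np.asarray(beat, dtype=float))
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = (a[i - 1] - b[j - 1]) ** 2
                D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
        path, i, j = [], n, m          # backtrack the optimal path
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
            if step == 0:
                i, j = i - 1, j - 1
            elif step == 1:
                i -= 1
            else:
                j -= 1
        return path[::-1]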

Vortex Wake Formation and Its Effects on Thrust and Propulsive Efficiency of an Oscillating Airfoil

Flows over a harmonically oscillating NACA 0012 airfoil are simulated here using a two-dimensional, unsteady, incompressible Navier-Stokes solver. Both pure plunging and combined pitching-plunging oscillations are considered at a Reynolds number of 5000. Special attention is paid to the vortex shedding and interaction mechanisms of the motions. For all the simulations presented here, the reduced frequency (k) is fixed at 2.5 and the plunging amplitude (h) is varied in the range 0.2-0.5. The simulation results show that the interaction between the leading- and trailing-edge vortices has a decisive effect on the resulting thrust and propulsive efficiency.

A Hybrid Approach for Quantification of Novelty in Rule Discovery

Rule discovery is an important technique for mining knowledge from large databases. The use of objective measures for discovering interesting rules leads to another data mining problem, although of reduced complexity. Data mining researchers have studied subjective measures of interestingness to reduce the volume of discovered rules and ultimately improve the overall efficiency of the KDD process. In this paper we study the novelty of discovered rules as a subjective measure of interestingness. We propose a hybrid approach that uses both objective and subjective measures to quantify the novelty of discovered rules in terms of their deviations from known rules. We analyze the types of deviation that can arise between two rules and categorize the discovered rules according to a user-specified threshold. We implement the proposed framework and experiment with several public datasets; the experimental results are quite promising.
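
As an illustration of the deviation idea, here is a minimal sketch in Python (the scoring function is a plausible stand-in, not the paper's exact measure): a rule is a pair (antecedent, consequent) of item sets, and a discovered rule is labelled by its smallest deviation from any known rule.

    def deviation(rule, known):
        (a1, c1), (a2, c2) = rule, known
        d_ante = len(a1 ^ a2) / max(len(a1 | a2), 1)  # antecedent deviation
        d_cons = len(c1 ^ c2) / max(len(c1 | c2), 1)  # consequent deviation
        return (d_ante + d_cons) / 2.0

    def categorize(discovered, known_rules, threshold=0.5):
        """At or below the user-specified threshold a discovered rule
        conforms to existing knowledge; above it, the rule is novel."""
        return {rule: ("novel"
                       if min(deviation(rule, k) for k in known_rules) > threshold
                       else "conforming")
                for rule in discovered}

    known = [(frozenset({"age>60"}), frozenset({"risk=high"}))]
    found = [(frozenset({"age>60", "smoker"}), frozenset({"risk=low"}))]
    print(categorize(found, known))  # deviates in both parts -> novel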

AI Applications to Metal Stamping Die Design – A Review

Metal stamping die design is a complex, experience-based, and time-consuming task. Various artificial intelligence (AI) techniques are being used by researchers worldwide for stamping die design to reduce its complexity, its dependence on human expertise, and the time taken by the design process, as well as to improve design efficiency. This paper presents a comprehensive review of the applications of AI techniques to manufacturability evaluation of sheet metal parts, die design, and process planning of metal stamping dies. The salient features of the major research published in the area of metal stamping are summarized in tabular form, and the scope for future research is identified.

A Hybrid Approach for Color Image Quantization Using K-means and Firefly Algorithms

Color image quantization (CQ) is an important problem in computer graphics and image processing. The aim of quantization is to reduce the number of colors in an image with minimum distortion. Clustering is a widely used technique for color quantization: all colors in an image are grouped into a small number of clusters. In this paper, we propose a new hybrid approach to color quantization using the firefly algorithm (FA) and the K-means algorithm. The firefly algorithm is a swarm-based algorithm for solving optimization problems. The proposed method overcomes the drawbacks of both algorithms, namely the tendency of K-means to converge to local optima and the premature convergence of the firefly algorithm. Experiments on three commonly used images show that the proposed algorithm surpasses both baselines, K-means clustering and the original firefly algorithm.
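
A compact sketch of one plausible form of the hybrid follows (parameter values and the exact FA/K-means coupling are illustrative, not the paper's): each firefly encodes an entire k-color palette, brightness is the negative quantization error, and a few Lloyd iterations refine the best palette the swarm finds. Call it with pixels = img.reshape(-1, 3).

    import numpy as np

    def quant_error(pixels, palette):
        # Mean squared distance from each pixel to its nearest palette color.
        d = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=-1)
        return d.min(axis=1).mean()

    def firefly_kmeans(pixels, k=16, n_fireflies=8, iters=20,
                       beta0=1.0, gamma=1e-4, alpha=5.0, seed=0):
        rng = np.random.default_rng(seed)
        pixels = np.asarray(pixels, dtype=float)
        # Each firefly is a candidate palette of k colors.
        swarm = pixels[rng.integers(len(pixels), size=(n_fireflies, k))].copy()
        for _ in range(iters):
            brightness = np.array([-quant_error(pixels, p) for p in swarm])
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if brightness[j] > brightness[i]:   # move i toward brighter j
                        r2 = ((swarm[i] - swarm[j]) ** 2).sum()
                        beta = beta0 * np.exp(-gamma * r2)
                        swarm[i] += (beta * (swarm[j] - swarm[i])
                                     + alpha * rng.normal(size=swarm[i].shape))
        best = swarm[int(np.argmax([-quant_error(pixels, p) for p in swarm]))].copy()
        for _ in range(10):   # K-means (Lloyd) refinement of the best palette
            labels = ((pixels[:, None, :] - best[None, :, :]) ** 2).sum(-1).argmin(1)
            for c in range(k):
                if np.any(labels == c):
                    best[c] = pixels[labels == c].mean(axis=0)
        return best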

Chemical Degradation of Dieldrin using Ferric Sulfide and Iron Powder

The chemical degradation of dieldrin in aqueous suspensions of ferric sulfide and of iron powder was investigated in laboratory batch experiments. To identify the reaction mechanism, reduced copper was used as a reductant. More than 90% of the dieldrin was degraded in both reaction systems after 29 days. The initial degradation rate of the pesticide with ferric sulfide was higher than that with iron powder. The reaction schemes were completely dissimilar, even though ferric ions play an important role in both systems. In the case of metallic iron powder, dieldrin undergoes partial dechlorination. This reaction proceeds by reductive hydrodechlorination with the generation of H+, which arises from the oxidation of the iron. The reductive reaction was accelerated by the reductant, but mono-dechlorinated intermediates accumulated. In contrast, oxidative degradation was observed in the reaction with ferric sulfide, and the stable chemical structure of dieldrin was decomposed into water-soluble intermediates that no longer retain the drin-class structure. This dehalogenation reaction is assumed to occur via adsorbed hydroxyl radicals generated on the surface of the ferric sulfide.
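
For orientation, the textbook half-reactions for reductive hydrodechlorination by zero-valent iron (a standard scheme, not a claim about this paper's exact mechanism) are:

    \mathrm{Fe^{0} \rightarrow Fe^{2+} + 2\,e^{-}}
    \mathrm{R{-}Cl + H^{+} + 2\,e^{-} \rightarrow R{-}H + Cl^{-}}

Each such step replaces one chlorine atom with hydrogen, which is consistent with the accumulation of mono-dechlorinated intermediates noted above.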

Edit Distance Algorithm to Increase Storage Efficiency of Javanese Corpora

Since a one-to-one word translator lacks the facility to translate the pragmatic aspects of Javanese, the parallel text alignment model described here uses phrase pair combinations. The algorithm aligns the parallel text automatically from the beginning to the end of each sentence. Although the results of the phrase pair combination outperform the previous algorithm, it is still inefficient: recording all possible combinations consumes more database space and is time consuming. The original algorithm is therefore modified by applying an edit distance coefficient to improve data-storage efficiency. As a result, data-storage consumption is reduced by 90%, as is the learning period (42 s).
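
A minimal sketch of the standard Levenshtein distance and a normalized coefficient built on it follows (the paper's exact coefficient may differ). One way such a coefficient saves space is to store a new phrase pair only when its similarity to every already-stored pair stays below a threshold, so near-duplicate combinations stop consuming the database.

    def edit_distance(a, b):
        """Classic Levenshtein DP over two token sequences, O(len(a)*len(b))."""
        prev = list(range(len(b) + 1))
        for i, x in enumerate(a, 1):
            cur = [i]
            for j, y in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (x != y)))     # substitution
            prev = cur
        return prev[-1]

    def similarity(a, b):
        """Edit-distance coefficient in [0, 1]; 1 means identical sequences."""
        if not a and not b:
            return 1.0
        return 1.0 - edit_distance(a, b) / max(len(a), len(b))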

Treatment of Recycled Concrete Aggregates by Si-Based Polymers

The recycling of concrete, bricks, and masonry rubble as concrete aggregates is an important way to contribute to a sustainable material flow. However, various uncertainties still limit the widespread use of recycled concrete aggregates (RCA). Of particular concern are the fluctuations in the composition of graded recycled aggregates and their influence on the properties of fresh and hardened concrete. Most of the problems that occur when using RCA are due to its higher porosity and hence higher water absorption, its lower mechanical strength, and residual impurities on its surface that form a weaker bond between cement paste and aggregate. As a result, the reuse of RCA is still limited. An efficient polymer-based treatment is proposed to make the reuse of RCA easier. Silicon-based polymer treatments of RCA were carried out and compared. Such treatment can improve the properties of RCA; for example, the rate of water absorption of treated RCA is significantly reduced.

The Correlation between Peer Aggression and Peer Victimization: Are Aggressors Victims Too?

To investigate the possible correlation between peer aggression and peer victimization, 148 sixth-graders were asked to respond to the Reduced Aggression and Victimization Scales (RAVS). The RAVS measures the frequency of reported aggressive behaviors or of being victimized during the week prior to the survey. The scales are composed of six items each, and each point represents one instance of aggression or victimization. The Pearson product-moment correlation coefficient (PMCC) was used to determine the correlations between the sixth-graders' scores on the two scales, both for individual items and for total scores. Positive correlations were established, and the correlations were significant at the 0.01 level.
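
For reference, the coefficient used above is the standard

    r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}}

where x_i and y_i are a student's aggression and victimization scores and n = 148.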

Motion Prediction and Motion Vector Cost Reduction during Fast Block Motion Estimation in MCTF

In the 3D wavelet video coding framework, temporal filtering is performed along the trajectory of motion using motion compensated temporal filtering (MCTF). Hence, a computationally efficient motion estimation technique is essential for MCTF. In this paper, a predictive technique is proposed to reduce the computational complexity of the MCTF framework by exploiting the high correlation among the frames in a group of pictures (GOP). The proposed technique applies the coarse and fine searches of any fast block-based motion estimation algorithm only to the first pair of frames in a GOP. The generated motion vectors are then supplied to the subsequent frames, even at later temporal levels, and only a fine search is carried out around the predicted motion vectors. The coarse search is thus skipped for all motion estimation in a GOP except for the first pair of frames. The technique has been tested with different fast block-based motion estimation algorithms on standard test sequences using MC-EZBC, a state-of-the-art scalable video coder. The simulation results reveal a substantial reduction (20.75% to 38.24%) in the number of search points during motion estimation, without compromising the quality of the reconstructed video compared to non-predictive techniques. Since the motion vectors of every pair of frames in a GOP except the first lie within ±1 of the motion vectors of the previous pair, the number of bits required for motion vectors is also reduced by 50%.
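
A minimal sketch of the refinement step follows (plain NumPy with hypothetical helper names; any fast coarse search can produce the first pair's vectors). Because the refined vector differs from the inherited one by at most ±1 in each component, only a small residual needs to be coded, which is consistent with the 50% motion-vector bit saving reported above.

    import numpy as np

    def sad(cur, ref, bx, by, dx, dy, B=16):
        # Sum of absolute differences between the current block and the
        # displaced candidate block in the reference frame.
        h, w = ref.shape
        x, y = bx + dx, by + dy
        if x < 0 or y < 0 or x + B > w or y + B > h:
            return np.inf
        return np.abs(cur[by:by + B, bx:bx + B].astype(int)
                      - ref[y:y + B, x:x + B].astype(int)).sum()

    def fine_refine(cur, ref, bx, by, predicted_mv, B=16):
        """Refine an inherited motion vector within +/-1 pel only; the
        coarse search is skipped for every frame pair in the GOP except
        the first."""
        best_cost, best_mv = np.inf, predicted_mv
        for ddy in (-1, 0, 1):
            for ddx in (-1, 0, 1):
                mv = (predicted_mv[0] + ddx, predicted_mv[1] + ddy)
                cost = sad(cur, ref, bx, by, mv[0], mv[1], B)
                if cost < best_cost:
                    best_cost, best_mv = cost, mv
        return best_mv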

An Efficient Multi Join Algorithm Utilizing a Lattice of Double Indices

In this paper, a novel multi-join algorithm for joining multiple relations is introduced. The algorithm is based on a hash-based join of two relations that produces a double index. This is done by scanning the two relations once; but instead of moving the records into buckets, a double index is built, which eliminates the collisions that can occur in a conventional hash algorithm. The double index is divided into join buckets of matching categories from the two relations, and the algorithm then joins buckets with matching keys to produce joined buckets. This ultimately yields a complete join index of the two relations without actually joining the relations themselves. The time complexity required to build the join index of two categories is O(m log m), where m is the size of each category, giving a total time complexity of O(n log m) over all buckets. The join index is used to materialize the joined relation if required; otherwise, it is used along with the join indices of other relations to build a lattice for multi-join operations with minimal I/O requirements. The lattice of join indices can be fitted into main memory to reduce the time complexity of the multi-join algorithm.
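
An illustrative reading of the double-index idea in Python (data layout deliberately simplified; not the paper's exact structure): one scan of each relation records, per join key, which record ids carry that key, and the result pairs record ids rather than records, so the join can be materialized later or fed into the lattice.

    from collections import defaultdict

    def build_join_index(r_rows, s_rows, r_key, s_key):
        """One scan of each relation builds the double index; since
        records are never moved into buckets, no hash-collision
        handling is needed."""
        index = defaultdict(lambda: ([], []))
        for rid, row in enumerate(r_rows):
            index[row[r_key]][0].append(rid)
        for sid, row in enumerate(s_rows):
            index[row[s_key]][1].append(sid)
        # Sorting stands in for the paper's O(m log m) per-category step;
        # keys present in both relations form the join index.
        return [(key, rids, sids)
                for key, (rids, sids) in sorted(index.items())
                if rids and sids]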

The Development of Positive Emotion Regulation Strategies Scale for Children and Adolescents

The study was designed to develop the positive emotion regulation questionnaire (PERQ), a self-report measure of positive emotion regulation strategies. The 14 items developed for the survey instrument were based on the literature regarding elements of positive regulation strategies. 319 elementary school students (ages ranging from 12 to 14) were recruited from three public elementary schools and surveyed on their use of positive emotion regulation strategies. Of the 319 subjects, 20 returned invalid questionnaires, yielding a response rate of 92%. The collected data were analyzed using methods such as item analysis, factor analysis, and structural equation modeling. Based on the results of the item analysis, the formal survey instrument was reduced to 11 items. A principal axis factor analysis with varimax rotation performed on the responses yielded a two-factor solution (savoring strategy and neutralizing strategy) that accounted for 55.5% of the total variance. The two-factor structure of the scale was also confirmed by structural equation modeling. Finally, the reliability coefficients of the two factors were Cronbach's α = .92 and .74. A gender difference was found only in the savoring strategy. In conclusion, the PERQ offers a brief, internally consistent, and valid self-report measure of the emotion regulation strategies of children that may be useful to researchers and applied professionals.
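
For reference, the reliability coefficient reported above is the standard Cronbach's alpha,

    \alpha = \frac{k}{k - 1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{Y_i}}{\sigma^2_X}\right)

where k is the number of items loading on the factor, \sigma^2_{Y_i} is the variance of item i, and \sigma^2_X is the variance of the factor's total score.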

A Smart-Visio Microphone for Audio-Visual Speech Recognition "Vmike"

The practical implementation of coupled audio-video speech recognition systems is mainly limited by the hardware complexity of integrating two radically different information-capturing devices with good temporal synchronization. In this paper, we propose a solution based on a smart CMOS image sensor that simplifies these hardware integration difficulties. Using on-chip image processing, the smart sensor computes the X/Y projections of the captured image in real time. This on-chip projection considerably reduces the volume of the output data. The reduction permits transmission of the condensed visual information over the same audio channel, using the stereophonic input available on most standard computing devices such as PCs, PDAs, and mobile phones. A prototype called VMIKE (Visio-Microphone) has been designed and realized in a standard 0.35 µm CMOS technology. A preliminary experiment gives encouraging results. Its efficiency will be further investigated in a wide variety of applications, such as biometrics, speech recognition in noisy environments, and vocal control for military users or disabled persons.
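
The on-chip reduction is easy to state precisely; here is a minimal functional sketch in Python/NumPy (the actual computation happens in hardware on the sensor). A 64x64 region, for instance, collapses from 4096 pixel values to 64 + 64 = 128 projection values.

    import numpy as np

    def xy_projections(frame):
        """Row and column sums of the grey-level image: a WxH frame is
        condensed to W + H values, small enough to ride on the second
        channel of a stereophonic audio input."""
        frame = np.asarray(frame, dtype=np.uint32)
        proj_x = frame.sum(axis=0)   # X projection: one value per column
        proj_y = frame.sum(axis=1)   # Y projection: one value per row
        return proj_x, proj_y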

Feature Based Dense Stereo Matching using Dynamic Programming and Color

This paper presents a new feature-based dense stereo matching algorithm that obtains the dense disparity map via dynamic programming. After extracting suitable features, we apply matching constraints such as the epipolar line, the disparity limit, ordering, and a limit on the directional derivative of disparity. A coarse-to-fine multiresolution strategy is also used to decrease the search space and thereby increase accuracy and processing speed. The proposed method links the detected feature points into chains and compares feature points across different chains to increase matching speed. We also employ color stereo matching to increase the accuracy of the algorithm. After feature matching, dynamic programming is used to obtain the dense disparity map. The method differs from classical DP approaches in stereo vision, since it employs the sparse disparity map obtained from the feature-based matching stage: the DP is performed on each scan line only between pairs of matched feature points on that line. Thus our algorithm is truly an optimization method, and it offers a good trade-off between accuracy and computational efficiency. In our experiments, the proposed algorithm increases accuracy by 20 to 70% and reduces the running time by almost 70%.

Tehran-Tabriz Intelligent Highway

The need to implement intelligent highways is increasingly emphasized by the growth of vehicle production as well as vehicle intelligence. Controlling intelligent vehicles so as to reduce human error and ease congestion cannot be accomplished by human resources alone. The present article introduces an intelligent control system based on a single central computer. In this project, the central computer, without utilizing the Global Positioning System (GPS), is capable of tracking all vehicles, managing and controlling crises, guiding traffic, and recording traffic violations along the highway. With RFID technology, vehicles are connected to the computerized systems, intelligent light poles, and other hardware available along the way. With WiMAX communication technology, all components of the system are virtually connected through the local and global networks devised in them, and the energy of the network is provided by the solar cells installed on the intelligent light poles.

Improving the Effectiveness of Software Testing through Test Case Reduction

This paper proposes a new technique for improving the efficiency of software testing, based on reducing the test cases that have to be executed for any given software. The approach exploits the advantage of regression testing, where fewer test cases lessen the time consumed by testing as a whole. The technique also offers a means to generate test cases automatically; compared to a technique in the literature where the tester has no option but to generate test cases manually, the proposed technique provides a better option. For test case reduction, the technique uses simple algebraic conditions to assign fixed values to variables (maximum, minimum, and constant values). In this way, the variables' values are limited to a definite range, resulting in fewer possible test cases to process. The technique can also be applied to program loops and arrays.
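
A minimal sketch of the value-fixing idea follows (hypothetical interface; the paper's algebraic conditions are richer than a plain integer range): each variable is pinned to its minimum, its maximum, and one mid-range constant before combinations are enumerated.

    from itertools import product

    def reduced_test_cases(variable_ranges):
        """Map {name: (lo, hi)} to the cross product of three fixed
        values per variable instead of the full value ranges."""
        fixed = {name: sorted({lo, (lo + hi) // 2, hi})
                 for name, (lo, hi) in variable_ranges.items()}
        names = list(fixed)
        return [dict(zip(names, combo))
                for combo in product(*(fixed[n] for n in names))]

    # Two integer inputs collapse from 100 * 50 = 5000 raw combinations
    # to at most 3 * 3 = 9 representative test cases.
    cases = reduced_test_cases({"x": (1, 100), "y": (1, 50)})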

Hopfield Network as Associative Memory with Multiple Reference Points

The Hopfield model of associative memory is studied in this work, in particular two main problems that it possesses: the appearance of spurious patterns in the learning phase, implying the well-known effect of storing the opposite of each pattern, and its reduced capacity, meaning that it is not possible to store a large number of patterns without increasing the error probability in the retrieval phase. In this paper, a method to avoid spurious patterns is presented and studied, and an explanation of the previously mentioned effect is given. A further technique to increase the capacity of a network is proposed, based on the idea of using several reference points when storing patterns. It is studied in depth, and an explicit formula for the capacity of the network under this technique is provided.
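
As background, here is a sketch of the baseline Hebbian storage and retrieval that the paper extends (the multiple-reference-point technique itself is the paper's contribution and is not reproduced here). The weights below store -x whenever they store x, which is exactly the opposite-pattern effect mentioned above.

    import numpy as np

    def train_hopfield(patterns):
        """Hebbian weights W = (1/N) * sum_p x_p x_p^T with a zeroed
        diagonal; patterns is a (P, N) array of +/-1 entries."""
        N = patterns.shape[1]
        W = patterns.T @ patterns / N
        np.fill_diagonal(W, 0.0)
        return W

    def recall(W, probe, steps=20):
        """Synchronous retrieval by repeated thresholding."""
        s = probe.copy()
        for _ in range(steps):
            s = np.sign(W @ s)
            s[s == 0] = 1
        return s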

Hybrid Genetic-Simulated Annealing Approach for Fractal Image Compression

In this paper, a hybrid technique combining a genetic algorithm with simulated annealing (HGASA) is applied to fractal image compression (FIC). FIC is a spatial-domain image compression technique, but its main drawback is the high computational time caused by the global search for matches between range blocks and domain blocks. With the help of this hybrid evolutionary algorithm, an effort is made to reduce the complexity of that search. The concept of simulated annealing (SA) is incorporated into the genetic algorithm (GA) to avoid premature convergence of the strings. The HGASA technique is proposed to improve the computational time while keeping acceptable quality in the decoded image. Experimental results show that the proposed HGASA outperforms the GA in terms of PSNR for fractal image compression.
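
The simulated-annealing ingredient reduces to a single acceptance rule; a minimal sketch (names illustrative) is below. In FIC, the cost would be the range-domain block matching error, and the temperature is lowered over generations so the search settles down.

    import math, random

    def sa_accept(parent_cost, child_cost, temperature):
        """Metropolis rule grafted into the GA's replacement step: a
        worse offspring still survives with probability exp(-delta/T),
        which is what counteracts premature convergence of the strings."""
        if child_cost <= parent_cost:
            return True
        return random.random() < math.exp((parent_cost - child_cost) / temperature)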

Effect of Amplitude and Mean Angle of Attack on Wake of an Oscillating Airfoil

The unsteady wake of an EPPLER 361 airfoil in pitching motion was investigated in a subsonic wind tunnel by hot-wire anemometry. The airfoil was pitched about the quarter-chord axis at a reduced frequency of 0.182. Streamwise mean velocity profiles (wake profiles) were measured at several vertically aligned points behind the airfoil, at a downstream distance of one quarter-chord from the trailing edge. The oscillation amplitude and mean angle of attack were varied to determine their effects on the wake profiles. When the maximum dynamic angle of attack was below the static stall angle of attack, increasing the oscillation amplitude and mean angle of attack had only weak effects on the wake; at higher angles of attack, however, strong unsteady effects appeared in the wake.

Simultaneous Determination of Reference Free-Stream Temperature and Convective Heat Transfer Coefficient

It is very important to determine the reference temperature when evaluating convective heat transfer, because it is used to calculate the temperature potential. This paper deals with the development of a new method that can determine the heat transfer coefficient and the reference free-stream temperature simultaneously, based on transient heat transfer experiments using two narrow-band thermotropic liquid crystals (TLCs). The method is validated through an error analysis in terms of the random uncertainties in the measured temperatures, and it is shown how the uncertainties in the heat transfer coefficient and free-stream temperature can be reduced. The general method described in this paper is applicable to many heat transfer models with an unknown free-stream temperature.
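
Transient experiments of this kind typically rest on the one-dimensional semi-infinite conduction solution for a step change in the free stream; as a sketch (the standard form, not necessarily the paper's exact model):

    \frac{T_s(t) - T_i}{T_\infty - T_i} = 1 - \exp\left(\frac{h^2 \alpha t}{k^2}\right) \mathrm{erfc}\left(\frac{h \sqrt{\alpha t}}{k}\right)

where T_s is the surface temperature indicated by a liquid crystal event, T_i the initial temperature, and \alpha and k the thermal diffusivity and conductivity of the substrate. Two narrow-band TLCs give two (T_s, t) events and hence two equations in the two unknowns h and T_\infty.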