Diagnosis of the Heart Rhythm Disorders by Using Hybrid Classifiers

In this study, several heart rhythm disorders were identified from electrocardiography (ECG) data taken from the MIT-BIH arrhythmia database by extracting the required features and presenting them to artificial neural network (ANN), artificial immune system (AIS), artificial immune system based artificial neural network (AIS-ANN) and particle swarm optimization based artificial neural network (PSO-ANN) classifier systems. The main purpose of this study is to evaluate the performance of the hybrid AIS-ANN and PSO-ANN classifiers with respect to ANN and AIS. For this purpose, normal sinus rhythm (NSR), atrial premature contraction (APC), sinus arrhythmia (SA), ventricular trigeminy (VTI), ventricular tachycardia (VTK) and atrial fibrillation (AF) data were obtained for each of the RR intervals. These data were then combined in pairs (NSR-APC, NSR-SA, NSR-VTI, NSR-VTK and NSR-AF), a discrete wavelet transform was applied to each of the two groups of data, and, after data reduction, two different data sets with 9 and 27 features were obtained from each of them. Afterwards, the data were first randomly shuffled, and then the 4-fold cross-validation method was applied to create the training and testing data. The training and testing accuracy rates and training times were compared with each other. As a result, the performances of the hybrid classification systems, AIS-ANN and PSO-ANN, were seen to be close to that of the ANN system, and the results of the hybrid systems were also much better than those of AIS. However, ANN had a much shorter training time than the other systems; in terms of training time, ANN was followed by PSO-ANN, AIS-ANN and AIS, respectively. The features extracted from the data also affected the classification results significantly.
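The shuffling and 4-fold cross-validation protocol described above can be sketched as follows; the synthetic 9-feature data and the nearest-centroid stand-in for the trained classifiers are illustrative assumptions, not the study's actual ANN/AIS implementations:

```python
import numpy as np

def four_fold_cv(X, y, n_folds=4, seed=0):
    """Shuffle the data, split into n_folds, and report the mean test accuracy
    of a simple nearest-centroid classifier (a stand-in for the ANN)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))            # random mixing of the samples
    folds = np.array_split(idx, n_folds)
    accs = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        classes = np.unique(y[train])
        # one centroid per class, computed from the training fold only
        centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
        pred = classes[d.argmin(axis=1)]
        accs.append(np.mean(pred == y[test]))
    return float(np.mean(accs))

# two well-separated synthetic classes, 9 features each (as in the 9-feature set)
rng_data = np.random.default_rng(1)
X = np.vstack([rng_data.normal(0, 1, (40, 9)), rng_data.normal(5, 1, (40, 9))])
y = np.array([0] * 40 + [1] * 40)
print(four_fold_cv(X, y))  # ≈ 1.0 for well-separated classes
```

The same loop applies unchanged to the 27-feature set; only the width of `X` changes.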

Seismic Vulnerability Mitigation of Non-Engineered Buildings

The tremendous loss of life in the aftermath of recent earthquakes in developing countries is mostly due to the collapse of non-engineered and semi-engineered building structures. Such structures are used as houses, schools, primary healthcare centers and government offices. These buildings fall structurally into two categories, viz. non-engineered and semi-engineered. Non-engineered structures include adobe, unreinforced masonry (URM) and wood buildings. Semi-engineered buildings are mostly low-rise (up to 3 stories) light concrete frame structures or masonry bearing walls with reinforced concrete slabs. This paper presents an overview of the typical damage observed in non-engineered structures in past earthquakes and its most likely causes, with specific emphasis on the performance of such structures in the 2005 Kashmir earthquake. It is demonstrated that the seismic performance of these structures can be improved, from a life-safety viewpoint, by adopting simple low-cost modifications to existing construction practices. The incorporation of some of these practices in the reconstruction efforts after the 2005 Kashmir earthquake is examined in the last section for mitigating seismic risk.

Comparative Study on Swarm Intelligence Techniques for Biclustering of Microarray Gene Expression Data

Microarray gene expression data play a vital role in biological processes, gene regulation and disease mechanisms. A bicluster in gene expression data is a subset of genes showing consistent patterns under a subset of conditions, and finding biclusters is an optimization problem. In recent years, swarm intelligence techniques have become popular because many real-world problems are increasingly large, complex and dynamic. Given the size and complexity of these problems, it is necessary to find an optimization technique whose efficiency is measured by finding a near-optimal solution within a reasonable amount of time. In this paper, the algorithmic concepts of the Particle Swarm Optimization (PSO), Shuffled Frog Leaping (SFL) and Cuckoo Search (CS) algorithms are analyzed on four benchmark gene expression datasets. The experimental results show that CS outperforms PSO and SFL on three datasets, while SFL gives better performance on one dataset. This work also determines the biological relevance of the biclusters with Gene Ontology in terms of function, process and component.
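As one example of the compared techniques, the canonical PSO velocity/position update can be sketched on a toy continuous objective; the inertia and acceleration coefficients below are common textbook defaults, not the settings used in the paper, and the biclustering encoding itself is not reproduced here:

```python
import numpy as np

def pso_minimize(f, dim=5, n_particles=20, iters=200, seed=0):
    """Minimal particle swarm optimization of a continuous objective f."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()                              # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()      # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                     # textbook default coefficients
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

best, val = pso_minimize(lambda z: np.sum(z ** 2))  # sphere function
print(val)  # near 0
```

For biclustering, `f` would instead score a candidate gene/condition subset (e.g. by mean squared residue), with a suitable discrete encoding of the particles.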

Analysis of Electrocardiograph (ECG) Signal for the Detection of Abnormalities Using MATLAB

The proposed method is to study and analyze the electrocardiograph (ECG) waveform to detect abnormalities with reference to the P, Q, R and S peaks. The first phase includes the acquisition of real-time ECG data. The next phase involves the generation of signals, followed by pre-processing. Thirdly, the procured ECG signal is subjected to feature extraction. The extracted features detect abnormal peaks present in the waveform; thus normal and abnormal ECG signals can be differentiated based on the features extracted. The work is implemented in MATLAB, a familiar multipurpose tool. This software efficiently uses algorithms and techniques for the detection of any abnormalities present in the ECG signal. Proper utilization of MATLAB functions (both built-in and user-defined) allows ECG signals to be processed and analyzed in real-time applications. The simulation would help in improving the accuracy, and the hardware could be built conveniently.
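Although the paper's pipeline is implemented in MATLAB, the R-peak detection idea can be illustrated with a minimal sketch on a synthetic signal; the threshold ratio, refractory period and test waveform below are assumptions for illustration only, not the paper's algorithm:

```python
import numpy as np

def detect_r_peaks(sig, fs, thresh_ratio=0.6, refractory=0.2):
    """Simple threshold-plus-refractory R-peak detector (illustrative only)."""
    thresh = thresh_ratio * sig.max()
    min_gap = int(refractory * fs)        # ignore peaks closer than 200 ms
    peaks = []
    for i in range(1, len(sig) - 1):
        # local maximum above threshold, outside the refractory window
        if sig[i] > thresh and sig[i] >= sig[i - 1] and sig[i] > sig[i + 1]:
            if not peaks or i - peaks[-1] > min_gap:
                peaks.append(i)
    return np.array(peaks)

# synthetic "ECG": 1 Hz train of sharp Gaussian R waves plus baseline wander
fs = 500
t = np.arange(0, 5, 1 / fs)
sig = sum(np.exp(-((t - c) ** 2) / (2 * 0.01 ** 2)) for c in [0.5, 1.5, 2.5, 3.5, 4.5])
sig = sig + 0.05 * np.sin(2 * np.pi * 0.3 * t)
peaks = detect_r_peaks(sig, fs)
print(len(peaks), np.diff(peaks) / fs)  # 5 beats, RR intervals ≈ 1.0 s
```

Once the R peaks are located, RR intervals and peak amplitudes become the features from which normal and abnormal beats can be separated.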

Estimation of Fecundity and Gonadosomatic Index of Terapon jarbua from Pondicherry Coast, India

In the present study, the fecundity of Terapon jarbua was estimated for 41 mature females from the Bay of Bengal, Pondicherry. The fecundity (F) was found to range from 13,475 to 115,920 in fishes of 173–278 mm total length (TL) and 65–298 g weight. The coefficients of correlation for F/TL (log F = -4.821 + 4.146 log TL), F/SL (log F = -3.936 + 3.867 log SL), F/TW (log F = 1.229 + 0.730 log TW) and F/GW (log F = 0.724 + 1.113 log GW) were obtained as 0.474, 0.537, 0.641 and 0.908 respectively. The regression lines for the TL, SL, TW and GW of the fishes were found to be linear when plotted against their fecundity on logarithmic scales. Highly significant (P
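Log-log regressions of the form log F = a + b log TL, as reported above, can be reproduced in outline as follows; the data here are synthetic, generated around the published F/TL relationship purely for illustration, not the study's measurements:

```python
import numpy as np

# Synthetic sample of 41 fish in the reported length range, with log-fecundity
# scattered around the published relationship log F = -4.821 + 4.146 log TL.
rng = np.random.default_rng(0)
TL = rng.uniform(173, 278, 41)                       # total length, mm
logF = -4.821 + 4.146 * np.log10(TL) + rng.normal(0, 0.05, 41)

b, a = np.polyfit(np.log10(TL), logF, 1)             # slope b, intercept a
r = np.corrcoef(np.log10(TL), logF)[0, 1]            # coefficient of correlation
print(f"log F = {a:.3f} + {b:.3f} log TL, r = {r:.3f}")
```

The same fit, applied to the other predictors (SL, TW, GW), yields the remaining regression equations and correlation coefficients.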

An Assessment of the Effects of Microbial Products on the Specific Oxygen Uptake in Submerged Membrane Bioreactor

Sustaining a desired rate of oxygen transfer for microbial activity is a matter of major concern in biological wastewater treatment with membrane bioreactors (MBR). The study reported in this paper was aimed at assessing the effects of microbial products on the specific oxygen uptake rate (SOUR) in a conventional membrane bioreactor (CMBR) and in a sponge submerged MBR (SSMBR). The production and progressive accumulation of soluble microbial products (SMP) and bound extracellular polymeric substances (bEPS) affected the SOUR of the microorganisms, which varied at different stages of operation of the MBR systems depending on the variable concentrations of SMP/bEPS. The effect of bEPS on the SOUR was stronger in the SSMBR than that of the SMP, while relatively high concentrations of SMP had adverse effects on the SOUR in the CMBR system. Of the different mathematical correlations analyzed in the study, logarithmic correlations could be established between SOUR and bEPS in the SSMBR, and similar correlations could also be found between SOUR and SMP concentrations in the CMBR.

Entropic Measures of a Probability Sample Space and Exponential Type (α, β) Entropy

Entropy is a key measure in studies related to information theory and its many applications. Campbell was the first to recognize that the exponential of Shannon's entropy is just the size of the sample space when the distribution is uniform. This motivates the study of exponentials of Shannon's entropy and of those other entropy generalizations that involve a logarithmic function, for a probability distribution in general. In this paper, we introduce a measure of the sample space, called the 'entropic measure of a sample space', with respect to the underlying distribution. It is shown, in both the discrete and continuous cases, that this new measure depends on the parameters of the distribution on the sample space: the same sample space has different 'entropic measures' depending on the distributions defined on it. It is noted that Campbell's idea also applies to Rényi's parametric entropy of a given order. Knowing that parameters play a role in providing suitable choices and extended applications, the paper also studies parametric entropic measures of sample spaces. Exponential entropies related to Shannon's entropy and to those generalizations that involve logarithmic functions, i.e. that are additive, have been studied for wider understanding and applications. We propose and study exponential entropies corresponding to the non-additive entropies of type (α, β), which include the Havrda and Charvát entropy as a special case.
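Campbell's observation, that exp(H) equals the sample-space size for a uniform distribution, is easy to verify numerically; the small sketch below also shows the analogous exponential of Rényi's entropy (the distributions are chosen arbitrarily for illustration):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in nats, H(p) = -sum p_i ln p_i (with 0 ln 0 := 0)."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return float(-np.sum(nz * np.log(nz)))

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha != 1 in nats."""
    p = np.asarray(p, dtype=float)
    return float(np.log(np.sum(p ** alpha)) / (1 - alpha))

n = 8
uniform = np.full(n, 1 / n)
print(np.exp(shannon_entropy(uniform)))      # 8.0 — Campbell's observation
print(np.exp(renyi_entropy(uniform, 2.0)))   # also 8.0 in the uniform case
skewed = np.array([0.7, 0.1, 0.1, 0.05, 0.05])
print(np.exp(shannon_entropy(skewed)))       # < 5: an effective sample-space size
```

The last line illustrates the paper's point: on the same five-point sample space, a non-uniform distribution yields a smaller 'entropic measure' than the uniform one.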

Unsteady Heat and Mass Transfer in MHD Flow of Nanofluids over Stretching Sheet with a Non-Uniform Heat Source/Sink

In this paper, the problem of heat and mass transfer in unsteady MHD boundary-layer flow of nanofluids over a stretching sheet with a non-uniform heat source/sink is considered. The unsteadiness in the flow and temperature is caused by the time-dependent stretching velocity and surface temperature. The unsteady boundary-layer equations are transformed to a system of non-linear ordinary differential equations and solved numerically using the Keller box method. The velocity, temperature and concentration profiles were obtained and used to compute the skin-friction coefficient, local Nusselt number and local Sherwood number for different values of the governing parameters, viz. the solid volume fraction parameter, unsteadiness parameter, magnetic field parameter, Schmidt number, and the space- and temperature-dependent heat source/sink parameters. A comparison of the numerical results of the present study with previously published data revealed excellent agreement.

Synthesis, Characterization and Physico–Chemical Properties of Nano Zinc Oxide and PVA Composites

Polymer nanocomposites represent a new class of materials in which nanomaterials act as the reinforcing phase, wherein small additions of nanomaterials lead to large enhancements in thermal, optical and mechanical properties. The boost in these properties is due to the large interfacial area per unit volume or weight of the nanoparticles and the interactions between the particles and the polymer. Micro-sized particles used as reinforcing agents scatter light, thus reducing light transmittance and optical clarity. Efficient nanoparticle dispersion combined with good polymer–particle interfacial adhesion eliminates scattering and allows the exciting possibility of developing strong yet transparent films, coatings and membranes. This paper aims at synthesizing zinc oxide nanoparticles as reinforcement in poly(vinyl alcohol) (PVA). The mechanical tests showed that the tensile strength of the PVA nanocomposites increases with the amount of nanoparticles.

Compressive Strength Evaluation of Underwater Concrete Structures Integrating the Combination of Rebound Hardness and Ultrasonic Pulse Velocity Methods with Artificial Neural Networks

In this study, two kinds of nondestructive evaluation (NDE) techniques (rebound hardness and ultrasonic pulse velocity methods) are investigated for the effective maintenance of underwater concrete structures. A new methodology to estimate underwater concrete strengths more effectively, named "artificial neural network (ANN)-based concrete strength estimation with the combination of rebound hardness and ultrasonic pulse velocity methods", is proposed and verified through a series of experimental works.

A Review of Pharmacological Prevention of Peri-and Post-Procedural Myocardial Injury after Percutaneous Coronary Intervention

The concept of myocardial injury, although first recognized in animal studies, is now recognized as a clinical phenomenon that may result in microvascular damage, the no-reflow phenomenon, myocardial stunning, myocardial hibernation and ischemic preconditioning. The final consequence of this event is left ventricular (LV) systolic dysfunction leading to increased morbidity and mortality. The typical clinical case of reperfusion injury occurs in acute myocardial infarction (MI) with ST-segment elevation, in which occlusion of a major epicardial coronary artery is followed by recanalization of the artery. This may occur spontaneously or by means of thrombolysis and/or primary percutaneous coronary intervention (PCI) with efficient platelet inhibition by aspirin (acetylsalicylic acid), clopidogrel and glycoprotein IIb/IIIa inhibitors. In recent years, PCI has become a well-established technique for the treatment of coronary artery disease; it improves symptoms in patients with coronary artery disease, and the safety of the procedure has been increasing. However, peri- and post-procedural myocardial injury, including angiographic slow coronary flow, microvascular embolization, and elevated levels of cardiac enzymes such as creatine kinase and troponin-T and -I, has been reported even in elective cases. Furthermore, myocardial reperfusion injury at the onset of reperfusion, which causes tissue damage and cardiac dysfunction, may occur in cases of acute coronary syndrome. Because patients with myocardial injury have larger myocardial infarctions and a worse long-term prognosis than those without, it is important to prevent myocardial injury during and/or after PCI in patients with coronary artery disease. To date, many studies have demonstrated that adjunctive pharmacological treatment suppresses myocardial injury and increases coronary blood flow during PCI procedures.
In this review, we highlight the usefulness of pharmacological treatment in combination with PCI in attenuating myocardial injury in patients with coronary artery disease.

Low Value Capacitance Measurement System with Adjustable Lead Capacitance Compensation

The present paper describes the development of a low-cost, highly accurate low-capacitance measurement system that can be used over a range of 0–400 pF with a resolution of 1 pF. The range may be easily altered by a simple resistance or capacitance variation in the measurement circuit. The system uses the quad two-input NAND Schmitt-trigger circuit CD4093B, with hysteresis, for the measurement, and is integrated with a PIC 18F2550 microcontroller for data acquisition. The microcontroller interacts with software on the PC end through a USB architecture, and an attractive graphical user interface (GUI) is provided on the PC to give the user a real-time, online display of the capacitance under measurement. The system uses a differential mode of capacitance measurement, with reference to a trimmer capacitance, that effectively compensates lead capacitances, a notorious source of error in usual low-capacitance measurements. The hysteresis provided by the Schmitt-trigger circuits enables reliable operation of the system by greatly minimizing the possibility of false triggering due to stray interference, usually regarded as another source of significant error. Real-life testing showed that the proposed system produces highly accurate capacitance measurements when compared to cutting-edge, high-end digital capacitance meters.
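The differential measurement principle can be modelled in outline: a Schmitt-trigger RC relaxation oscillator has a period roughly proportional to R·C, so subtracting a reference run (trimmer and leads only) cancels the lead capacitance. The component values and the calibration constant below are assumptions for illustration, not the actual CD4093B circuit parameters:

```python
# Differential capacitance measurement model. The oscillator period is taken
# as T ≈ k·R·C, where k is a threshold-dependent calibration constant.
def capacitance_from_periods(T_x, T_ref, R, k):
    """The reference run cancels the lead capacitance common to both runs."""
    return (T_x - T_ref) / (k * R)

R = 100e3          # timing resistor, ohms (assumed value)
k = 1.4            # assumed calibration constant of the oscillator
C_lead = 15e-12    # stray/lead capacitance present in both runs
C_x = 220e-12      # capacitor under test

T_ref = k * R * C_lead            # period with trimmer/leads only
T_x = k * R * (C_lead + C_x)      # period with the unknown in circuit
print(capacitance_from_periods(T_x, T_ref, R, k) * 1e12)  # ≈ 220 pF
```

Note that the 15 pF lead contribution drops out entirely, which is the point of the differential (trimmer-referenced) mode.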

Quantum Enhanced Correlation Matrix Memories via States Orthogonalisation

This paper introduces a Quantum Correlation Matrix Memory (QCMM) and an Enhanced QCMM (EQCMM), which are useful for working with quantum memories. A version of the classical Gram-Schmidt orthogonalisation process in Dirac notation (called the Quantum Orthogonalisation Process, QOP) is presented to convert a non-orthonormal quantum basis, i.e., a set of non-orthonormal quantum vectors (called qudits), into an orthonormal quantum basis, i.e., a set of orthonormal qudits. This work shows that it is possible to improve the performance of the QCMM thanks to the QOP algorithm. Besides, the EQCMM algorithm has many additional fields of application, e.g., steganography, as a replacement for Hopfield networks, bilevel image processing, etc. Finally, it is important to mention that the EQCMM is extremely easy to implement in any firmware.

Cold Analysis for Dispersion, Attenuation and RF Efficiency Characteristics of a Gyrotron Cavity

In the present paper, a gyrotron cavity is analyzed in the absence of an electron beam for its dispersion, attenuation and RF efficiency characteristics. For all these characteristics, azimuthally symmetric TE0n modes have been considered. The attenuation characteristics of the TE0n modes indicate a decrease in the attenuation constant as the frequency increases; interestingly, the lowest-order TE01 mode results in the lowest attenuation. Further, three different cavity wall materials have been considered for the attenuation characteristics, and the cavity made of the material with higher conductivity results in lower attenuation. The effect of the material's electrical conductivity on the RF efficiency has also been examined, and it has been found that the RF efficiency decreases rapidly as the electrical conductivity of the cavity material decreases; the RF efficiency also decreases rapidly with increasing diffractive quality factor. The ohmic loss variation as a function of the operating frequency has been observed for three cavities made of copper, aluminum and nickel; the ohmic losses are lowest for the copper cavity, which hence has the highest RF efficiency.

Energy Efficient Transmission of Image over DWT-OFDM System

In many applications retransmission of lost packets is not permitted. OFDM is a multi-carrier modulation scheme with excellent performance that allows overlapping in the frequency domain and deals with multipath using relatively simple DSP algorithms. In this paper, an image frame is compressed using the DWT, and the compressed data are arranged in data vectors, each with an equal number of coefficients. These vectors are quantized and binary coded to obtain the bit streams, which are then packetized and intelligently mapped onto the OFDM system. Based on one-bit channel state information at the transmitter, the descriptions, in order of descending priority, are assigned to the currently good sub-channels so that poorer sub-channels can only affect the less important data vectors. We consider only one-bit channel state information at the transmitter, indicating only whether each sub-channel is good or bad: for a good sub-channel, the instantaneous received power should be greater than a threshold Pth; otherwise, the sub-channel is in a fading state and considered bad for that batch of coefficients. In order to reduce the system power consumption, the descriptions mapped onto the bad sub-channels are dropped at the transmitter. The binary channel state information thus gives an opportunity to map the bit streams intelligently and to save a reasonable amount of power. Using MATLAB simulation, we analyze the performance of the proposed scheme in terms of system energy saving without compromising the received quality, measured as peak signal-to-noise ratio.
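The priority mapping driven by one-bit channel state information can be sketched as follows; the threshold, power values and number of data vectors are illustrative assumptions, not the paper's simulation settings:

```python
import numpy as np

def map_descriptions(priorities, subchannel_power, P_th):
    """Assign descriptions (data vectors), most important first, to the
    sub-channels whose instantaneous power exceeds P_th (one-bit CSI).
    Returns, per sub-channel, the description index it carries, or None
    for a bad sub-channel; leftover descriptions are simply dropped."""
    good = [i for i, p in enumerate(subchannel_power) if p > P_th]
    order = np.argsort(priorities)[::-1]          # descending importance
    assignment = [None] * len(subchannel_power)
    for desc, ch in zip(order, good):             # best data -> good channels
        assignment[ch] = int(desc)
    return assignment

power = [1.2, 0.3, 0.9, 0.1, 2.0]   # instantaneous received powers (assumed)
prios = [5, 1, 3, 2]                # importance of 4 data vectors (assumed)
print(map_descriptions(prios, power, P_th=0.5))
```

Here the two sub-channels in a fading state carry nothing, and the single lowest-priority description is dropped at the transmitter, which is where the power saving comes from.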

Flow Characteristics and Heat Transfer Enhancement in 2D Corrugated Channels

The present study numerically investigates the flow field and heat transfer of water in two-dimensional sinusoidal and rectangular corrugated-wall channels. Simulations are performed for fully developed flow conditions at the inlet sections of channels that have 12 waves. The temperature of the incoming fluid is taken to be lower than that of the wavy walls. The governing continuity, momentum and energy equations are solved numerically using a finite volume method based on the SIMPLE technique. The investigation covers Reynolds numbers in the range of 100–1000. The effect of the distance between the upper and lower corrugated walls is studied by varying the Hmin/Hmax ratio from 0.3 to 0.5 while keeping the wavelength and wave amplitude fixed for both geometries. The effects of the wall geometry, Reynolds number and distance between the walls on the flow characteristics, the local Nusselt number and heat transfer are studied. It is found that heat transfer enhancement increases with the use of corrugated horizontal walls in an appropriate regime of Reynolds number and channel height.

Dynamic Safety-Stock Calculation

In order to ensure a high service level, industrial enterprises have to maintain safety stock, which at the same time directly influences economic efficiency. This paper analyses established mathematical methods for calculating safety stock. The performance, measured in stock and service level, is appraised and the limits of several methods are depicted. Afterwards, a new dynamic approach is presented that yields a comprehensive method for calculating safety stock which also takes knowledge of future volatility into account.
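As a point of reference for the established methods the paper appraises, a common static textbook formula combines demand and lead-time variability under a normality assumption; this is shown only as a baseline sketch with assumed numbers, not the paper's new dynamic approach:

```python
from math import sqrt
from statistics import NormalDist

# Static safety-stock baseline (textbook formula):
#   SS = z * sqrt(L_mean * sigma_d^2 + d_mean^2 * sigma_L^2)
# where z is the standard-normal quantile for the target service level.
def safety_stock(service_level, d_mean, sigma_d, L_mean, sigma_L):
    z = NormalDist().inv_cdf(service_level)   # safety factor for the target level
    return z * sqrt(L_mean * sigma_d ** 2 + d_mean ** 2 * sigma_L ** 2)

# 95% service level, demand 100 ± 20 units/day, lead time 5 ± 1 days (assumed)
ss = safety_stock(0.95, d_mean=100, sigma_d=20, L_mean=5, sigma_L=1)
print(round(ss, 1))  # ≈ 180 units
```

A dynamic approach, by contrast, would re-evaluate the demand and lead-time statistics per period and fold in forecasts of future volatility rather than a single historical sigma.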

Kano’s Model for Clinical Laboratory

The clinical laboratory has received considerable recognition globally due to the rapid development of advanced technology, economic demands and its role in a patient's treatment cycle. Although various cross-domain experiments and practices with respect to clinical laboratory projects are in full swing, the customer needs are still ambiguous and debatable. The purpose of this study is to apply Kano's model and the customer satisfaction matrix to categorize service quality attributes in order to see how well these attributes are able to satisfy customer needs. The results reveal that ten of the 26 service quality attributes have a greater impact on increasing customer satisfaction and should be taken into consideration first.

Enhanced Gram-Schmidt Process for Improving the Stability in Signal and Image Processing

The Gram-Schmidt Process (GSP) is used to convert a non-orthogonal basis (a set of linearly independent vectors) into an orthonormal basis (a set of orthogonal, unit-length vectors). The process consists of taking each vector and subtracting from it its components along the previously processed vectors. This paper introduces an Enhanced version of the Gram-Schmidt Process (EGSP) with inverse, which is useful for signal and image processing applications.
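The classical process described above can be sketched as follows (a plain GSP for illustration, not the paper's enhanced EGSP with inverse):

```python
import numpy as np

def gram_schmidt(V, eps=1e-12):
    """Classical Gram-Schmidt: map the rows of V (assumed linearly
    independent) to orthonormal rows. Each vector has its components along
    the previously produced vectors subtracted, then is normalised."""
    Q = []
    for v in np.asarray(V, dtype=float):
        for q in Q:
            v = v - np.dot(q, v) * q      # remove the component along q
        n = np.linalg.norm(v)
        if n > eps:                        # skip (near-)dependent vectors
            Q.append(v / n)
    return np.array(Q)

V = np.array([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
Q = gram_schmidt(V)
print(np.round(Q @ Q.T, 10))   # identity matrix: the rows are orthonormal
```

The classical form shown here is known to lose orthogonality in finite-precision arithmetic for ill-conditioned inputs, which is precisely the stability concern that motivates enhanced variants such as the EGSP.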

A Robust Deterministic Energy Smart-Grid Decisional Algorithm for Agent-Based Management

This paper concerns the application of a deterministic decisional pattern to a multi-agent system intended to provide intelligence to a distributed energy smart grid at the local consumer level. The development of a multi-agent application involves agent specification, analysis, design and realization, and it can be implemented following several decisional patterns. The purpose of the present article is to suggest a new approach to controlling the smart grid system in a decentralized, competitive manner. The proposed algorithmic solution results from a deterministic dichotomous approach based on observation of the environment, using an iterative process to solve automatic learning problems. Through a memory of collected past trials, the algorithm monotonically converges to a very steep system operating point in the attraction basin resulting from the weak nonlinearity of the system. In this sense, the system is given, by local constitutive elementary rules, the intelligence of its global existence, so that it can self-organize toward an optimal operating sequence.