Parallel Block Backward Differentiation Formulas For Solving Large Systems of Ordinary Differential Equations

In this paper, parallelism in the solution of Ordinary Differential Equations (ODEs) is studied with the aim of increasing computational speed. The focus is the development of a parallel algorithm for the two-point Block Backward Differentiation Formulas (PBBDF) that can take advantage of parallel computer architectures. Parallelism is obtained by using the Message Passing Interface (MPI). Numerical results are given to validate the efficiency of the PBBDF implementation as compared to the sequential implementation.
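For readers unfamiliar with block methods, the following hedged sketch shows the basic mechanics: a two-point block built from the standard BDF2 formula, with the two coupled corrector equations solved by Jacobi-style fixed-point iteration. The coefficients and the stiff test problem are illustrative only, not the paper's exact PBBDF scheme; the comments mark where an MPI implementation would assign the two function evaluations to separate ranks.

```python
# Illustrative two-point block step (not the paper's exact PBBDF coefficients).
import numpy as np

def f(t, y):
    # Hypothetical stiff test problem: y' = -50*(y - cos(t))
    return -50.0 * (y - np.cos(t))

def two_point_block_step(t, y_prev, y_curr, h, iters=50):
    """Advance y1 ~ y(t+h) and y2 ~ y(t+2h) simultaneously."""
    y1, y2 = y_curr, y_curr                    # predictor: copy last value
    for _ in range(iters):
        f1 = f(t + h, y1)                      # in an MPI version, rank 0 ...
        f2 = f(t + 2 * h, y2)                  # ... and rank 1 evaluate these
        y1_new = (4 * y_curr - y_prev) / 3 + (2 * h / 3) * f1
        y2_new = (4 * y1 - y_curr) / 3 + (2 * h / 3) * f2
        y1, y2 = y1_new, y2_new                # Jacobi-style corrector update
    return y1, y2

t, h = 0.0, 0.01
y_prev, y_curr = 1.0, 1.0                      # hypothetical initial history
for _ in range(100):
    y1, y2 = two_point_block_step(t, y_prev, y_curr, h)
    y_prev, y_curr, t = y1, y2, t + 2 * h
print(t, y_curr)                               # solution tracks cos(t)
```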

Complex-Valued Neural Network in Signal Processing: A Study on the Effectiveness of the Complex-Valued Generalized Mean Neuron Model

A complex-valued neural network is a neural network whose inputs, weights, thresholds, and/or activation functions are complex-valued. Complex-valued neural networks have been widening their scope of applications, not only in electronics and informatics but also in social systems; one of their most important applications is in signal processing. Among neural network architectures, the generalized mean neuron (GMN) model is often discussed and studied. The GMN uses a new aggregation function based on the generalized mean of all the inputs to the neuron. This paper presents exhaustive results of using the generalized mean neuron model in a complex-valued neural network that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of the generalized mean neuron model in the complex plane for signal processing, compared with a real-valued neural network. We also report observations on the effects of the learning rate, the range of the randomly selected initial weights, the error function used, and the number of iterations required for error convergence in the generalized mean neural network model. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
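As a minimal illustration of the aggregation involved (my own construction, not the paper's exact network), the sketch below implements one complex-valued generalized mean neuron: the usual weighted sum is replaced by a generalized mean of order r, followed by the split activation tanh(Re z) + j tanh(Im z) commonly used with complex back-propagation.

```python
import numpy as np

def generalized_mean_neuron(x, w, r=2.0):
    """Complex generalized-mean aggregation followed by a split activation."""
    z = np.power(w * x, r).mean() ** (1.0 / r)   # principal branch of z**r
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

rng = np.random.default_rng(0)
x = rng.normal(size=4) + 1j * rng.normal(size=4)   # complex inputs
w = rng.normal(size=4) + 1j * rng.normal(size=4)   # complex weights
print(generalized_mean_neuron(x, w))
```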

Optimum Time Coordination of Overcurrent Relays using Two Phase Simplex Method

Overcurrent (OC) relays are the major protection devices in a distribution system. The operating times of the OC relays must be coordinated properly to avoid mal-operation of the backup relays. OC relay time coordination in ring-fed distribution networks is a highly constrained optimization problem that can be stated as a linear programming problem (LPP). The purpose is to find optimum relay settings that minimize the operating times of the relays while keeping the relays properly coordinated to avoid mal-operation. This paper presents a two-phase simplex method for the optimum time coordination of OC relays. The method is based on the simplex algorithm, which is used to find the optimum solution of an LPP. It introduces artificial variables to obtain an initial basic feasible solution (IBFS); the artificial variables are removed by the iterative process of the first phase, which minimizes an auxiliary objective function. The second phase minimizes the original objective function and gives the optimum time coordination of the OC relays.
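To make the LP formulation concrete, here is a hedged toy example (hypothetical network, gain constants, and coordination interval): operating times are modeled as t_i = a_i · TDS_i at a fixed fault current, each backup/primary pair must respect the coordination time interval (CTI), and scipy's linprog stands in for the two-phase simplex solver.

```python
import numpy as np
from scipy.optimize import linprog

a = np.array([2.0, 2.5, 3.0])      # hypothetical multipliers: t_i = a_i * TDS_i
CTI = 0.3                          # coordination time interval in seconds
pairs = [(1, 0), (2, 1)]           # (backup, primary): 2 backs up 1, 3 backs up 2

c = a.copy()                       # minimize the sum of operating times
A_ub, b_ub = [], []
for b_idx, p_idx in pairs:         # enforce t_backup - t_primary >= CTI
    row = np.zeros(3)
    row[b_idx], row[p_idx] = -a[b_idx], a[p_idx]
    A_ub.append(row)
    b_ub.append(-CTI)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.05, 1.1)] * 3)
print(res.x, res.fun)              # optimum TDS settings and total operating time
```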

Emissions of Euro 3-5 Passenger Cars Measured Over Different Driving Cycles

The reduction in vehicle exhaust emissions achieved in the last two decades is offset by the growth in traffic, as well as by changes in the composition of emitted pollutants. The present investigation illustrates the emissions of in-use gasoline and diesel passenger cars using the official European driving cycle and the ARTEMIS real-world driving cycle. It was observed that some of the vehicles do not comply with the corresponding regulations. Significant differences in emissions were observed between driving cycles. Not all pollutants showed a tendency to decrease from Euro 3 to Euro 5.

Methodology of the Energy Supply Disturbances Affecting Energy System

Recently, global concern for energy security has steadily increased, and it is expected to become a major issue over the next few decades. Energy security refers to a resilient energy system: one capable of withstanding threats through a combination of active, direct security measures and passive or more indirect measures such as redundancy, duplication of critical equipment, diversity in fuels and other energy sources, and reliance on less vulnerable infrastructure. Threats and disruptions (disturbances) to one part of the energy system affect another. This paper presents a methodology, with its theoretical background, that treats the energy system as an interconnected network and models the impact of energy supply disturbances on that network. The proposed methodology uses a network flow approach to develop a mathematical model of the energy system as a system of nodes and arcs, with energy flowing from node to node along paths in the network.
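A minimal sketch of this network flow view, using networkx with hypothetical node names and arc capacities: a disturbance is modeled by removing an arc, and its impact is the drop in the maximum energy flow deliverable to the demand node.

```python
import networkx as nx

G = nx.DiGraph()
# Hypothetical energy system: a super-source feeds two supply options.
G.add_edge("supply", "import_terminal", capacity=50)
G.add_edge("supply", "power_plant", capacity=80)
G.add_edge("import_terminal", "grid", capacity=50)
G.add_edge("power_plant", "grid", capacity=80)
G.add_edge("grid", "consumers", capacity=100)

base, _ = nx.maximum_flow(G, "supply", "consumers")

# Disturbance: lose the import arc, then recompute the deliverable flow.
G.remove_edge("import_terminal", "grid")
disturbed, _ = nx.maximum_flow(G, "supply", "consumers")
print(base, disturbed)   # 100 -> 80: the shortfall measures the impact
```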

Detection of Ultrasonic Images in the Presence of a Random Number of Scatterers: A Statistical Learning Approach

Support Vector Machine (SVM) is a statistical learning tool initially developed by Vapnik in 1979 and later extended into the more general framework of structural risk minimization (SRM). SVM is playing an increasing role in detection problems across various engineering fields, notably statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to the detection of medical ultrasound images in the presence of partially developed speckle noise. Simulations were carried out for single-look and multi-look speckle models to give a complete overview of, and insight into, the proposed SVM-based detector. The structure of the SVM detector was derived and applied to clinical ultrasound images, and its performance was evaluated in terms of the mean square error (MSE) metric. We show that the SVM-detected ultrasound images have a very low MSE and are of good quality, and that the quality of the processed speckled images improves for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original noise-free images, indicating that the SVM approach increases the distance between the pixel reflectivity levels (the detection hypotheses) in the original images.
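The following hedged sketch (synthetic data and my own feature choice, not the paper's detector) shows the flavor of SVM-based detection under speckle: two reflectivity hypotheses are corrupted by exponentially distributed single-look intensity speckle, local statistics over a few looks serve as features, and the MSE over the predicted hypothesis labels is reported.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 2000
labels = rng.integers(0, 2, n)                 # two reflectivity hypotheses
reflectivity = np.where(labels == 0, 1.0, 3.0)
# Single-look intensity speckle is exponentially distributed around the mean;
# a few neighbouring samples give a crude multi-look / local-mean feature.
looks = rng.exponential(1.0, (n, 4)) * reflectivity[:, None]
features = np.column_stack([looks.mean(axis=1), looks.std(axis=1)])

clf = SVC(kernel="rbf").fit(features[:1500], labels[:1500])
pred = clf.predict(features[1500:])
mse = np.mean((pred - labels[1500:]) ** 2)     # MSE over hypothesis labels
print("test MSE:", mse)
```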

Novel Method for Elliptic Curve Multi-Scalar Multiplication

The major building block of most elliptic curve cryptosystems is the computation of multi-scalar multiplications. This paper proposes a novel algorithm for simultaneous multi-scalar multiplication based on addition chains. Previously known methods use the double-and-add algorithm with binary representations. To accomplish our purpose, an efficient empirical method for finding addition chains for multiple exponents is also proposed.
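For contrast with the proposed approach, the sketch below implements the classical baseline mentioned above: simultaneous double-and-add (Shamir's trick) over the binary representations of two scalars. It is written multiplicatively modulo a prime to stay self-contained; on an elliptic curve, "multiply" becomes point addition and "square" becomes point doubling. The paper's addition-chain method itself is not reproduced here.

```python
def simultaneous_multiexp(g1, g2, a, b, p):
    """Compute (g1**a * g2**b) % p with one pass over the bits of a and b."""
    table = {(0, 1): g2 % p, (1, 0): g1 % p, (1, 1): (g1 * g2) % p}
    acc = 1
    for i in range(max(a.bit_length(), b.bit_length()) - 1, -1, -1):
        acc = (acc * acc) % p                  # "doubling" step
        bits = ((a >> i) & 1, (b >> i) & 1)
        if bits != (0, 0):
            acc = (acc * table[bits]) % p      # single joint "addition"
    return acc

p = 2**61 - 1
assert simultaneous_multiexp(3, 5, 123456, 654321, p) == \
       (pow(3, 123456, p) * pow(5, 654321, p)) % p
```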

Biosynthesis and In vitro Studies of Silver Bionanoparticles Synthesized from Aspergillus Species and Their Antimicrobial Activity against Multi-Drug-Resistant Clinical Isolates

Antimicrobial resistance is becoming a major factor in virtually all hospital-acquired infections, which may soon become untreatable; this is a serious public health problem. These concerns have led to a major research effort to discover alternative strategies for the treatment of bacterial infections. Nanobiotechnology is an emerging and fast-developing field with potential applications for human welfare. An important area of nanotechnology is the development of reliable and environmentally friendly processes for the synthesis of nanoscale particles through biological systems. The present study reports the use of the fungal strain Aspergillus species for the extracellular synthesis of bionanoparticles from a 1 mM silver nitrate (AgNO3) solution. The work focuses on the synthesis of metallic silver bionanoparticles through the reduction of aqueous Ag+ ions by the culture supernatants of the microorganism. The bio-reduction of the Ag+ ions in solution was monitored in the aqueous component, and the spectrum of the solution was measured with a UV-visible spectrophotometer. The bionanoscale particles were further characterized by Atomic Force Microscopy (AFM), Fourier Transform Infrared Spectroscopy (FTIR) and thin-layer chromatography. The synthesized bionanoscale particles showed a maximum absorption at 385 nm in the visible region. Atomic Force Microscopy showed that the silver bionanoparticles ranged in size from 250 nm to 680 nm. The work also analyzed the antimicrobial efficacy of the silver bionanoparticles against various multi-drug-resistant clinical isolates. The present study emphasizes the applicability of synthesizing metallic nanostructures and of understanding the biochemical and molecular mechanisms of nanoparticle formation by the cell filtrate, in order to achieve better control over the size and polydispersity of the nanoparticles. This would help in developing nanomedicine against various multi-drug-resistant human pathogens.

Chua’s Circuit Regulation Using a Nonlinear Adaptive Feedback Technique

Chua’s circuit is one of the most important electronic devices used in chaos and bifurcation studies, and it plays a central role in secure communication. Since adaptive control is used extensively in linear systems control, we introduce here a new application of adaptive methods to the field of chaos control. In this paper, we derive a new adaptive control scheme for regulating Chua’s circuit, since the control of chaos is often very important in practical operations. The novelty of this approach lies in its robustness against external perturbations, which are simulated as additive noise on all measured states, and in the fact that it can be generalized to other chaotic systems. Our approach is based on Lyapunov analysis, with an adaptation law for the feedback gain; for this reason, we have named it NAFT (Nonlinear Adaptive Feedback Technique). Finally, simulations show the capability of the presented technique on Chua’s circuit.
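A hedged sketch of the general idea (a generic Lyapunov-motivated adaptive gain, not necessarily the paper's exact NAFT law): Chua's circuit with a feedback u = -k·x on the first state and the adaptation law dk/dt = γx², which grows the gain until the regulated state settles.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Standard dimensionless Chua parameters plus an adaptation rate gamma.
alpha, beta, m0, m1, gamma = 15.6, 28.0, -1.143, -0.714, 5.0

def chua_adaptive(t, s):
    x, y, z, k = s
    h = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))
    u = -k * x                       # adaptive feedback on the measured state
    return [alpha * (y - x - h) + u,
            x - y + z,
            -beta * y,
            gamma * x ** 2]          # gain adaptation law

sol = solve_ivp(chua_adaptive, (0, 50), [0.7, 0.0, 0.0, 0.0], rtol=1e-8)
print("final |x|:", abs(sol.y[0, -1]), "final gain:", sol.y[3, -1])
```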

A Fitted Random Sampling Scheme for Load Distribution in Grid Networks

Grid networks provide the ability to perform high-throughput computing by taking advantage of many networked computers' resources to solve large-scale computation problems. As the popularity of Grid networks has increased, there is a need to distribute the load efficiently among the resources accessible on the network. In this paper, we present a stochastic network system that provides a distributed load-balancing scheme by generating almost regular networks. This network system is self-organized and depends only on local information for load distribution and resource discovery. The in-degree of each node reflects its free resources, and the job assignment and resource discovery processes required for load balancing are accomplished using fitted random sampling. Simulation results show that the generated network system provides an effective, scalable, and reliable load-balancing scheme for the distributed resources accessible on Grid networks.
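The simulation below is a simplified, hypothetical rendering of the idea: each node advertises free resources, and a job is assigned by randomly sampling a few candidates with probability proportional to free capacity (mimicking an in-degree proportional to free resources) and picking the least loaded among them.

```python
import random

random.seed(1)
free = [random.randint(1, 10) for _ in range(20)]   # free resources per node

def assign_job(free, sample_size=3):
    # Sampling weighted by free capacity mimics an in-degree proportional
    # to free resources in the generated network.
    candidates = random.choices(range(len(free)), weights=free, k=sample_size)
    target = max(candidates, key=lambda n: free[n])  # least loaded candidate
    free[target] -= 1                                # job consumes one unit
    return target

for job in range(50):
    assign_job(free)
print("remaining free resources:", free)
```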

Comparison of Frequency Converter Outages: A Case Study on the Swedish TPS System

The purpose of this paper is to analyze the unavailability of the two main types of converters in the Swedish traction power supply (TPS) system, i.e. the rotary converter and the static converter. The number of outages and the outage durations are used to analyze and compare the unavailability of the converters. The mean cumulative function (MCF) is applied to analyze the number of outages and the unavailability, while the forced outage rate (FOR) concept is used to compare outage rates. The study shows that outages due to failure occur at a constant rate by calendar time at most converter stations, while very few stations have a decreasing rate. It has also been found that the static converters have a higher number of outages and a higher outage rate compared to the rotary converter types. The results show that combining the number of outages and the forced outage rate gives a better view of the converters' performance and supports maintenance decisions; using a single index does not reflect reality. Comparing these two indexes helps in identifying the areas where extra resources are needed for maintenance planning and where improvements can reduce outages in the TPS system.
Keywords: Frequency Converter, Forced Outage Rate, Mean Cumulative Function, Traction Power Supply, Electrical Systems.
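The mean cumulative function named above has a simple nonparametric estimate, sketched here with hypothetical outage ages and the simplifying assumption that every station is observed over the same window: at each event age, the MCF increments by the number of events divided by the number of stations at risk.

```python
from collections import Counter

# Hypothetical outage ages (years) per converter station.
stations = [[0.5, 1.2, 2.0], [0.8, 2.5], [1.5], [0.3, 1.1, 1.9, 2.8]]
n = len(stations)

events = Counter(t for s in stations for t in s)
mcf, total = [], 0.0
for t in sorted(events):
    total += events[t] / n          # all stations observed over the full window
    mcf.append((t, total))
print(mcf)                          # cumulative outages per station vs. age
```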

Automatic 3D Reconstruction of Coronary Artery Centerlines from Monoplane X-ray Angiogram Images

We present a new method for the fully automatic 3D reconstruction of coronary artery centerlines, using two X-ray angiogram projection images from a single rotating monoplane acquisition system. In the first stage, the input images are smoothed using curve evolution techniques. Next, a simple yet efficient multiscale method for the enhancement of the vascular structure, based on the information in the Hessian matrix, is introduced. Hysteresis thresholding using different image quantiles is used to threshold the arteries. This stage is followed by a thinning procedure to extract the centerlines. The resulting skeleton image is then pruned using morphological and pattern recognition techniques to remove non-vessel-like structures. Finally, edge-based stereo correspondence is solved using a parallel evolutionary optimization method based on symbiosis. The detected 2D centerlines, combined with disparity map information, allow the reconstruction of the 3D vessel centerlines. The proposed method has been evaluated on patient data sets.
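The Hessian-based enhancement stage can be illustrated with a compact Frangi-style vesselness measure, which may differ in detail from the paper's measure: at each scale, the eigenvalues of the Gaussian-smoothed Hessian separate tubular structures (one small and one large eigenvalue) from blobs and background.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness(image, sigmas=(1, 2, 4), beta=0.5, c=0.5):
    best = np.zeros_like(image, dtype=float)
    for s in sigmas:
        # Scale-normalized second derivatives at scale s.
        Hxx = gaussian_filter(image, s, order=(0, 2)) * s**2
        Hyy = gaussian_filter(image, s, order=(2, 0)) * s**2
        Hxy = gaussian_filter(image, s, order=(1, 1)) * s**2
        # Eigenvalues of [[Hxx, Hxy], [Hxy, Hyy]], reordered so |l1| <= |l2|.
        tmp = np.sqrt((Hxx - Hyy) ** 2 + 4 * Hxy**2)
        l1, l2 = (Hxx + Hyy - tmp) / 2, (Hxx + Hyy + tmp) / 2
        swap = np.abs(l1) > np.abs(l2)
        l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
        rb2 = (l1 / (l2 + 1e-12)) ** 2            # blob vs. line ratio
        s2 = Hxx**2 + Hyy**2 + 2 * Hxy**2         # structure strength
        v = np.exp(-rb2 / (2 * beta**2)) * (1 - np.exp(-s2 / (2 * c**2)))
        v[l2 < 0] = 0           # keep dark tubes on bright background
        best = np.maximum(best, v)
    return best

img = np.ones((64, 64)); img[:, 30:33] = 0.0      # a dark vertical "vessel"
print(vesselness(img).max())
```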

Dynamic Adaptability Using Reflexivity for Mobile Agent Protection

The mobile agent paradigm provides a promising technology for the development of distributed and open applications. However, one of the main obstacles to its widespread adoption seems to be security. This paper treats the security of the mobile agent against malicious host attacks and describes a generic mobile agent protection architecture. The proposed approach is based on dynamic adaptability and adopts reflexivity as a model of design and implementation. To protect the agent against behaviour-analysis attempts, the suggested approach endows the mobile agent with the flexibility to present an unexpected behaviour. Furthermore, some classical protection mechanisms are used to reinforce the level of security.

Mobile Multicast Support using Old Foreign Agent (MMOFA)

IP multicasting is a key technology for many existing and emerging applications on the Internet. Furthermore, with the increasing popularity of wireless devices and mobile equipment, it is necessary to determine the best way to provide this service in a wireless environment. IETF Mobile IP, which provides mobility for hosts in IP networks, proposes two approaches for mobile multicasting, namely remote subscription (MIP-RS) and bi-directional tunneling (MIP-BT). In MIP-RS, a mobile host re-subscribes to its multicast groups each time it moves to a new foreign network, and it suffers from serious packet losses while handoff occurs. In MIP-BT, mobile hosts send and receive multicast packets by way of their home agents (HAs), using Mobile IP tunnels, and it therefore suffers from inefficient routing and wasted system resources. In this paper, we propose a protocol called Mobile Multicast support using the Old Foreign Agent (MMOFA). MMOFA is derived from MIP-RS; with the assistance of the mobile host's old foreign agent, it tunnels to the adjacent network the datagrams that would otherwise be lost during handoff. We studied the performance of the proposed protocol by simulation under ns-2.27. The results demonstrate that MMOFA has optimal routing efficiency and low delivery cost compared to the other approaches.

A Critical Survey of Reusability Aspects for Component-Based Systems

The last decade has shown that the object-oriented concept by itself is not powerful enough to cope with the rapidly changing requirements of ongoing applications. Component-based systems achieve flexibility by clearly separating the stable parts of systems (i.e. the components) from the specification of their composition. To realize the reuse of components effectively in component-based software development (CBSD), it is necessary to measure their reusability. However, due to the black-box nature of components, whose source code is not available, it is difficult to use conventional metrics in component-based development, as these metrics require analysis of source code. In this paper, we survey several existing component-based reusability metrics. These metrics give a broader view of a component's understandability, adaptability, and portability. The paper also describes an analysis, in terms of quality factors related to reusability, contained in an approach that aids significantly in assessing existing components for reusability.

Frequency and Amplitude Measurement of a Vibrating Object in Water Using Ultrasonic Speckle Technique

The principle of frequency and amplitude measurement of a vibrating object in water using the ultrasonic speckle technique is presented in this paper. Compared with traditional techniques, the ultrasonic speckle technique can be applied, in a non-contact way, to the vibration measurement of a non-metal object with a rough surface in water. The relationship between speckle movement and object movement was analyzed, and based on this study an ultrasonic speckle measurement system was set up. With this system, the frequency and amplitude of an underwater vibrating cantilever beam were measured. The results show that the experimental data are in good agreement with the calibration data.
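Once speckle movement has been converted into an object displacement time series, the final frequency and amplitude read-out is a standard spectral estimate; the hedged sketch below uses a synthetic 35 Hz, 12 µm displacement signal with hypothetical numbers.

```python
import numpy as np

fs = 2000.0                                  # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
disp = 12e-6 * np.sin(2 * np.pi * 35.0 * t)  # 35 Hz, 12 um amplitude
disp += 1e-6 * np.random.default_rng(0).normal(size=t.size)

spec = np.fft.rfft(disp)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = np.argmax(np.abs(spec[1:])) + 1       # skip the DC bin
print("frequency:", freqs[peak], "Hz")
print("amplitude:", 2 * np.abs(spec[peak]) / t.size, "m")
```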

Optimization of the Characteristic Straight Line Method by a “Best Estimate” of Observed, Normal Orthometric Elevation Differences

In this paper, to optimize the “Characteristic Straight Line Method”, which is used in soil displacement analysis, a “best estimate” of the geodetic leveling observations has been achieved by taking into account the concept of height systems. This concept, and consequently the concept of “height”, is discussed in detail. In landslide dynamic analysis, the soil is considered as a mosaic of rigid blocks. The soil displacement has been monitored and analyzed using the “Characteristic Straight Line Method”, whose characteristic components have been defined and constructed from a “best estimate” of the topometric observations. In the measurement of elevation differences, we used the most modern leveling equipment available, and observational procedures were designed to provide the most effective method of acquiring data. In addition, systematic errors which cannot be sufficiently controlled by instrumentation or observational techniques were minimized by applying appropriate corrections to the observed data: the level collimation correction minimizes the error caused by non-horizontality of the leveling instrument's line of sight for unequal sight lengths; the refraction correction is modeled to minimize the refraction error caused by temperature (density) variation of air strata; the rod temperature correction accounts for variation in the length of the leveling rod's Invar/LO-VAR® strip resulting from temperature changes; the rod scale correction ensures a uniform scale conforming to the international length standard; and the concept of height systems is introduced, in which all types of height (orthometric, dynamic, normal, gravity correction, and equipotential surface) are investigated. The “Characteristic Straight Line Method” is slightly more convenient than the “Characteristic Circle Method”: it permits the evaluation of a displacement of very small, even infinitesimal, magnitude. The inclination of the landslide is given by the inverse of the distance from reference point O to the “Characteristic Straight Line”, and its direction is given by the bearing of the normal directed from point O to the Characteristic Straight Line (Fig. 6). A “best estimate” of the topometric observations was used to measure the elevation of carefully selected points before and after the deformation. Gross errors were eliminated by statistical analyses and by comparing the heights within local neighborhoods. The results of a test using an area where very interesting land surface deformation occurs are reported. Monitoring with different options and a qualitative comparison of results based on a sufficient number of check points are presented.

Screening Wheat Parents of Mapping Population for Heat and Drought Tolerance, Detection of Wheat Genetic Variation

To evaluate the genetic variation of wheat (Triticum aestivum) affected by heat and drought stress in eight Australian wheat genotypes that are parents of Doubled Haploid (DH) mapping populations at the vegetative stage, a water stress experiment was conducted at 65% field capacity in a growth room. The heat stress experiment was conducted in the research field under irrigation over summer. Results show that water stress decreased dry shoot weight and RWC but increased osmolarity and mean Fv/Fm values in all varieties except Krichauff. Krichauff and Kukri had the maximum RWC under drought stress. The Trident variety showed the maximum WUE, osmolarity (610 mM/kg), dry matter, quantum yield, and Fv/Fm (0.815) under water stress. The recovery of quantum yield was apparent between 4 and 7 days after stress in all varieties; nevertheless, a further increase in water stress led to a strong decrease in quantum yield. There was genetic variation in leaf pigment content among varieties under heat stress. Heat stress significantly decreased the total chlorophyll content as measured by SPAD. Krichauff had the maximum anthocyanin content (2.978 A/g FW), chlorophyll a+b (2.001 mg/g FW), and chlorophyll a (1.502 mg/g FW). The maximum chlorophyll b (0.515 mg/g FW) and carotenoid (0.234 mg/g FW) contents belonged to Kukri. The quantum yield of all varieties decreased significantly when the temperature increased from 28 °C to 36 °C over 6 days; however, recovery of the quantum yield was apparent after the 8th day in all varieties. The maximum decrease and recovery in quantum yield were observed in Krichauff. The drought- and heat-tolerant and moderately tolerant genotypes included Trident, Krichauff, Kukri and RAC875, while Molineux, Berkut and Excalibur clustered into the most sensitive and moderately sensitive genotypes. Overall, the results show significant genetic variation among the eight varieties studied under heat and water stress.

A Novel Machining Signal Filtering Technique: Z-notch Filter

A filter is used to remove undesirable frequency information from a dynamic signal. This paper shows that the Z-notch filtering technique can be applied to remove noise from a machining signal. In machining, the noise components were identified from the sound produced by the operation of the machine components themselves, such as the hydraulic system, the motor, and the machine environment. By correlating the noise components with the measured machining signal, the components of interest, which are less contaminated by noise, can be extracted; the filtered signal is therefore more reliable to analyse than the unfiltered signal. Significantly, the I-kaz method, which comprises a three-dimensional graphical representation and the I-kaz coefficient Z∞, could differentiate between the filtered and unfiltered signals: a larger scattering space and a higher value of Z∞ indicate that the signal is heavily contaminated by noise. This method can be utilised as a proactive tool for evaluating the noise content of a signal. The evaluation and elimination of noise content are very important, especially for machining fault diagnosis. The Z-notch filtering technique was reliable in extracting the noise component from the measured machining signal with high efficiency: even though the measured signal was exposed to heavy noise disruption, the signal generated by the interaction between the cutting tool and the workpiece could still be recovered. Noise that could alter the original signal features, and consequently degrade the useful sensory information, can therefore be eliminated.
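As a stand-in for the noise-removal step (scipy's standard IIR notch, not the paper's Z-notch design), the sketch below removes a hypothetical 50 Hz hydraulic/motor tone from a simulated machining signal using zero-phase filtering.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 5000.0                                   # sampling rate, Hz
t = np.arange(0, 0.5, 1 / fs)
cutting = np.sin(2 * np.pi * 400 * t)         # signal of interest
noise = 0.8 * np.sin(2 * np.pi * 50 * t)      # identified noise component

b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)       # notch centred on the noise tone
filtered = filtfilt(b, a, cutting + noise)    # zero-phase filtering
print(np.std(noise), np.std(filtered - cutting))   # residual error shrinks
```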

Human Verification in a Video Surveillance System Using Statistical Features

A human verification system is presented in this paper. The system consists of several steps: background subtraction, thresholding, line connection, region growing, morphology, star skeletonization, feature extraction, feature matching, and decision making. The proposed system combines the advantages of star skeletonization with simple statistical features. Correlation matching and probability voting are used for verification, followed by a logical operation in the decision-making stage. The proposed system uses a small number of features, and its reliability is convincing.
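A minimal sketch of the star-skeletonization feature (my own construction; details surely differ from the paper's): the distance from the silhouette centroid to the farthest foreground pixel, sampled as a function of angle, yields a 1-D signature whose peaks correspond to the head and limbs.

```python
import numpy as np

def star_signature(mask, n_bins=72):
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                 # silhouette centroid
    angles = np.arctan2(ys - cy, xs - cx)
    dists = np.hypot(ys - cy, xs - cx)
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    sig = np.zeros(n_bins)
    np.maximum.at(sig, bins, dists)               # farthest point per angle
    return sig / (sig.max() + 1e-12)              # scale-normalized signature

mask = np.zeros((60, 40), bool)
mask[10:50, 15:25] = True                         # a crude upright silhouette
print(star_signature(mask).round(2))
```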