Optimum Time Coordination of Overcurrent Relays using Two Phase Simplex Method

Overcurrent (OC) relays are the major protection devices in a distribution system. The operating times of the OC relays must be coordinated properly to avoid mal-operation of the backup relays. OC relay time coordination in ring-fed distribution networks is a highly constrained optimization problem which can be stated as a linear programming problem (LPP). The purpose is to find an optimum relay setting that minimizes the operating times of the relays while keeping the relays properly coordinated to avoid mal-operation. This paper presents the two-phase simplex method for optimum time coordination of OC relays. The method is based on the simplex algorithm, which finds the optimum solution of an LPP. It introduces artificial variables to obtain an initial basic feasible solution (IBFS). The artificial variables are removed by the iterative first phase, which minimizes an auxiliary objective function. The second phase minimizes the original objective function and yields the optimum time coordination of the OC relays.
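As a minimal illustration of the LPP structure (not the paper's two-phase solver itself), the following sketch coordinates two hypothetical relays in closed form; the constants a1, a2, CTI, and TMS_MIN are assumed values, not data from the paper:

```python
# Two-relay coordination LP solved in closed form.  Operating time of
# relay i is modeled as t_i = a_i * TMS_i for a fixed fault current,
# where TMS_i is the time multiplier setting (decision variable).
a1, a2 = 2.0, 3.0          # hypothetical relay operation constants
CTI = 0.3                  # coordination time interval (s)
TMS_MIN = 0.025            # lower bound on the time multiplier setting

# LPP:  minimize t1 + t2 = a1*x1 + a2*x2
# s.t.  a2*x2 - a1*x1 >= CTI   (backup relay 2 coordinates with relay 1)
#       x1, x2 >= TMS_MIN
# Both the objective and the constraint push x1 down, so x1 = TMS_MIN,
# and the coordination constraint then binds: a2*x2 = CTI + a1*x1.
x1 = TMS_MIN
x2 = (CTI + a1 * x1) / a2
t1, t2 = a1 * x1, a2 * x2
print(round(x1, 4), round(x2, 4), round(t1 + t2, 4))  # 0.025 0.1167 0.4
```

For larger networks with many primary/backup pairs, the binding pattern is not known in advance, which is exactly why an iterative method such as the two-phase simplex is needed.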

A Methodology for Energy Supply Disturbances Affecting the Energy System

Recently, global concern for energy security has steadily been on the increase and is expected to become a major issue over the next few decades. Energy security refers to a resilient energy system: one capable of withstanding threats through a combination of active, direct security measures and passive or more indirect measures such as redundancy, duplication of critical equipment, diversity in fuels and other energy sources, and reliance on less vulnerable infrastructure. Threats and disruptions (disturbances) to one part of the energy system affect the others. This paper presents a methodology, with theoretical background, that treats the energy system as an interconnected network and describes the impact of energy supply disturbances on that network. The proposed methodology uses a network flow approach to develop a mathematical model of the energy system as a system of nodes and arcs with energy flowing from node to node along paths in the network.
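As an illustration of the network flow view (a sketch with assumed data, not the paper's model), the following computes the deliverable energy of a hypothetical three-arc network with Edmonds-Karp max flow, before and after a supply disturbance degrades one arc:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a capacity dict {u: {v: capacity}}."""
    # Build residual capacities, including zero-capacity reverse arcs.
    res = {u: dict(vs) for u, vs in cap.items()}
    nodes = set(cap) | {v for vs in cap.values() for v in vs}
    for u in nodes:
        res.setdefault(u, {})
    for u in list(res):
        for v in list(res[u]):
            res[v].setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path s -> t in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Find the bottleneck and augment along the path.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= push
            res[v][u] += push
        flow += push

# Hypothetical toy energy network: source "S", demand node "D",
# two supply corridors ("gas" and "grid").
arcs = {"S": {"gas": 60, "grid": 40}, "gas": {"D": 50}, "grid": {"D": 40}}
print(max_flow(arcs, "S", "D"))   # 90: deliverable energy, normal state
arcs["gas"]["D"] = 10             # disturbance: gas corridor degraded
print(max_flow(arcs, "S", "D"))   # 50: reduced deliverable energy
```

The drop in maximum flow after the arc is degraded is the kind of quantitative disturbance impact a network flow model makes visible.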

Novel Method for Elliptic Curve Multi-Scalar Multiplication

The major building block of most elliptic curve cryptosystems is the computation of multi-scalar multiplication. This paper proposes a novel algorithm for simultaneous multi-scalar multiplication that employs addition chains. Previously known methods utilize the double-and-add algorithm with binary representations. To accomplish our purpose, an efficient empirical method for finding addition chains for multiple exponents is proposed.
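For context, the baseline that such methods improve upon can be sketched as simultaneous double-and-add (Shamir's trick). The sketch below uses multiplication modulo a prime as a stand-in for elliptic curve point addition, since the technique depends only on the group operation; all numbers are illustrative:

```python
def simultaneous_exp(g, h, a, b, p):
    """Compute (g**a * h**b) % p in one left-to-right pass (Shamir's
    trick): the scalars a and b are scanned bit-by-bit together, so the
    number of squarings equals max(bitlen(a), bitlen(b)) instead of the
    sum required by two separate square-and-multiply runs."""
    gh = (g * h) % p                      # precomputed joint table entry
    acc = 1
    for i in range(max(a.bit_length(), b.bit_length()) - 1, -1, -1):
        acc = (acc * acc) % p             # one squaring/doubling per bit
        bits = ((a >> i) & 1, (b >> i) & 1)
        if bits == (1, 0):
            acc = (acc * g) % p
        elif bits == (0, 1):
            acc = (acc * h) % p
        elif bits == (1, 1):
            acc = (acc * gh) % p
    return acc

p = 10007
print(simultaneous_exp(3, 5, 123, 456, p)
      == pow(3, 123, p) * pow(5, 456, p) % p)  # True
```

Addition-chain approaches generalize this idea by searching for even shorter sequences of group operations than the bit-pattern of the scalars dictates.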

Profile Controlled Gold Nanostructures Fabricated by Nanosphere Lithography for Localized Surface Plasmon Resonance

Localized surface plasmon resonance (LSPR) is the coherent oscillation of conduction electrons confined in noble metal nanoparticles excited by electromagnetic radiation, and nanosphere lithography (NSL) is one of the most cost-effective methods to fabricate metal nanostructures for LSPR. NSL can be categorized into two major groups: dispersed NSL and closely packed NSL. In recent years, gold nanocrescents and gold nanoholes with vertical sidewalls fabricated by dispersed NSL, and silver nanotriangles and gold nanocaps on silica nanospheres fabricated by closely packed NSL, have been reported for LSPR biosensing. This paper introduces several novel gold nanostructures fabricated by NSL for LSPR applications, including 3D nanostructures obtained by evaporating gold obliquely onto dispersed nanospheres, nanoholes with slanted sidewalls, and patchy nanoparticles on closely packed nanospheres, all of which provide satisfactory sensitivity for LSPR sensing. Since the LSPR spectrum is very sensitive to the shape of the metal nanostructures, formulas are derived and software is developed for calculating the profiles of the metal nanostructures obtainable by NSL, for different nanosphere masks under different fabrication conditions. The simulated profiles coincide well with the profiles of the fabricated gold nanostructures observed under a scanning electron microscope (SEM) and an atomic force microscope (AFM), which shows that the software is a useful tool for the process design of different LSPR nanostructures.

Chua’s Circuit Regulation Using a Nonlinear Adaptive Feedback Technique

Chua’s circuit is one of the most important electronic devices used in chaos and bifurcation studies, and it also plays a central role in secure communication. Since adaptive control is widely used in linear systems control, we introduce here a new application of the adaptive method in the field of chaos control. In this paper, we derive a new adaptive control scheme for Chua’s circuit, because control of chaos is often very important in practical operation. The novelty of this approach lies in its robustness against external perturbations, which are simulated as additive noise on all measured states, and in the fact that it can be generalized to other chaotic systems. Our approach is based on Lyapunov analysis, with an adaptation law for the feedback gain; for this reason we have named it NAFT (Nonlinear Adaptive Feedback Technique). Finally, simulations show the capability of the presented technique for Chua’s circuit.
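A minimal simulation sketch of adaptive feedback on Chua's circuit is given below. The parameters, the single-state feedback u = -k*x, and the adaptation law k' = gamma*x^2 are illustrative choices in the spirit of Lyapunov-based adaptation, not the paper's actual NAFT derivation:

```python
# Dimensionless Chua's circuit with a hypothetical adaptive feedback
# u = -k*x on the first state; the gain adapts as k' = gamma*x**2,
# a simple Lyapunov-motivated high-gain law (illustrative only).
alpha, beta = 9.0, 100.0 / 7.0
m0, m1 = -8.0 / 7.0, -5.0 / 7.0

def f(x):
    # Chua diode piecewise-linear characteristic.
    return m1 * x + 0.5 * (m0 - m1) * (abs(x + 1.0) - abs(x - 1.0))

x, y, z, k = 0.7, 0.0, 0.0, 0.0
gamma, dt = 5.0, 0.001
for _ in range(200000):            # forward Euler up to t = 200
    u = -k * x                     # adaptive feedback on the first state
    dx = alpha * (y - x - f(x)) + u
    dy = x - y + z
    dz = -beta * y
    dk = gamma * x * x             # gain grows while x is nonzero
    x, y, z, k = x + dt * dx, y + dt * dy, z + dt * dz, k + dt * dk

norm = (x * x + y * y + z * z) ** 0.5
print(norm < 0.3, k > 0.0)
```

The feedback acts only on the first state; once that state is driven to zero, the remaining subsystem is stable, which is why a single adaptive gain suffices in this sketch.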

Application of Artificial Neural Network for Predicting Maintainability Using Object-Oriented Metrics

The growing importance of software quality is driving the development of new, sophisticated techniques that can be used to construct models for predicting quality attributes. One such technique is the Artificial Neural Network (ANN). This paper examines the application of ANNs to software quality prediction using Object-Oriented (OO) metrics. Quality estimation here means estimating the maintainability of software. The dependent variable in our study was maintenance effort; the independent variables were principal components of eight OO metrics. The results showed that the ANN model achieved a Mean Absolute Relative Error (MARE) of 0.265. We therefore found the ANN method useful in constructing software quality models.
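For reference, the MARE metric reported above can be computed as follows; the effort values in the example are hypothetical, not the study's data:

```python
def mare(actual, predicted):
    """Mean Absolute Relative Error: mean(|y - yhat| / y)."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical maintenance-effort values vs. a model's predictions
# (illustrative numbers only, not the study's data set).
actual    = [100.0, 200.0, 400.0, 500.0]
predicted = [80.0, 180.0, 380.0, 475.0]
print(round(mare(actual, predicted), 4))  # 0.1
```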

An Approach to Solving a Permutation Problem of Frequency Domain Independent Component Analysis for Blind Source Separation of Speech Signals

Independent component analysis (ICA) in the frequency domain is used to solve the problem of blind source separation (BSS). However, this method has some problems: for example, a general ICA algorithm cannot determine the permutation of the separated signals, which is important in frequency-domain ICA. In this paper, we propose an approach to solving the permutation problem. The idea is to effectively combine two conventional approaches, improving the signal separation performance by exploiting the strengths of each. We show simulation results using artificial data.

Automatic 3D Reconstruction of Coronary Artery Centerlines from Monoplane X-ray Angiogram Images

We present a new method for the fully automatic 3D reconstruction of coronary artery centerlines, using two X-ray angiogram projection images from a single rotating monoplane acquisition system. In the first stage, the input images are smoothed using curve-evolution techniques. Next, a simple yet efficient multiscale method for enhancing the vascular structure, based on the information in the Hessian matrix, is introduced. Hysteresis thresholding using different image quantiles is used to segment the arteries, followed by a thinning procedure to extract the centerlines. The resulting skeleton image is then pruned using morphological and pattern recognition techniques to remove non-vessel-like structures. Finally, edge-based stereo correspondence is solved using a parallel evolutionary optimization method based on symbiosis. The detected 2D centerlines, combined with the disparity map information, allow the reconstruction of the 3D vessel centerlines. The proposed method has been evaluated on patient data sets.
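The hysteresis-thresholding stage can be sketched as follows (a toy example with assumed thresholds, not the quantile-derived ones used in the paper): pixels above the high threshold seed the mask, and weaker pixels are kept only if connected to a seed, which helps avoid broken vessel segments:

```python
from collections import deque

def hysteresis_threshold(img, low, high):
    """Keep pixels >= high, plus any pixels >= low that are 8-connected
    to a kept pixel (the standard hysteresis rule)."""
    rows, cols = len(img), len(img[0])
    keep = [[False] * cols for _ in range(rows)]
    q = deque()
    for r in range(rows):
        for c in range(cols):
            if img[r][c] >= high:      # strong seeds
                keep[r][c] = True
                q.append((r, c))
    while q:                           # grow seeds through weak pixels
        r, c = q.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (0 <= rr < rows and 0 <= cc < cols
                        and not keep[rr][cc] and img[rr][cc] >= low):
                    keep[rr][cc] = True
                    q.append((rr, cc))
    return keep

# Toy "vesselness" map: a faint pixel (0.4) attached to a strong core
# (0.9) survives, while an isolated faint blob (0.4) is discarded.
img = [[0.0, 0.4, 0.9, 0.0],
       [0.0, 0.0, 0.0, 0.0],
       [0.4, 0.0, 0.0, 0.0]]
mask = hysteresis_threshold(img, low=0.3, high=0.8)
print(mask[0][1], mask[0][2], mask[2][0])  # True True False
```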

Approximate Solution of Nonlinear Fredholm Integral Equations of the First Kind via Converting to Optimization Problems

In this paper we introduce an optimization-based approach to finding approximate solutions of nonlinear Fredholm integral equations of the first kind. For this purpose, we consider two stages of approximation: first we convert the integral equation to a moment problem, and then we recast the new problem as two classes of optimization problems, unconstrained optimization problems and optimal control problems. Finally, numerical examples are presented.
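A minimal sketch of the unconstrained-optimization route is shown below, with a hypothetical separable kernel and right-hand side (not an example from the paper). Because first-kind equations are ill-posed, the check is on the residual rather than on the recovered function:

```python
# First-kind Fredholm equation  integral_0^1 K(s,t) x(t) dt = y(s),
# discretized by the midpoint rule and solved by minimizing the
# squared residual with plain gradient descent.
n = 20
h = 1.0 / n
t = [(i + 0.5) * h for i in range(n)]      # midpoint quadrature nodes
K = lambda s, u: s * u                     # hypothetical kernel
y = lambda s: s / 3.0                      # consistent with x(t) = t

s_pts = t                                  # collocate at the same nodes
x = [0.0] * n                              # initial guess

def residuals(x):
    return [sum(K(s, t[j]) * x[j] * h for j in range(n)) - y(s)
            for s in s_pts]

for _ in range(2000):                      # gradient of sum of r_i^2
    r = residuals(x)
    grad = [2.0 * sum(r[i] * K(s_pts[i], t[j]) * h for i in range(n))
            for j in range(n)]
    x = [x[j] - 5.0 * grad[j] for j in range(n)]

# Ill-posedness: many x fit the data, so we only check the residual.
print(max(abs(v) for v in residuals(x)) < 1e-6)  # True
```

In practice a regularization term is usually added to the objective; it is omitted here to keep the sketch minimal.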

Optimization of the Characteristic Straight Line Method by a "Best Estimate" of Observed Normal Orthometric Elevation Differences

In this paper, to optimize the "Characteristic Straight Line Method" used in soil displacement analysis, a "best estimate" of the geodetic leveling observations has been achieved by taking into account the concept of height systems. This concept, and consequently the concept of "height", is discussed in detail. In landslide dynamic analysis, the soil is considered a mosaic of rigid blocks. The soil displacement has been monitored and analyzed using the "Characteristic Straight Line Method", whose characteristic components have been constructed from a "best estimate" of the topometric observations. In the measurement of elevation differences, we have used the most modern leveling equipment available, and observational procedures have been designed to provide the most effective way to acquire data. In addition, systematic errors which cannot be sufficiently controlled by instrumentation or observational techniques are minimized by applying appropriate corrections to the observed data. The level collimation correction minimizes the error caused by non-horizontality of the leveling instrument's line of sight for unequal sight lengths; the refraction correction is modeled to minimize the refraction error caused by temperature (density) variation of the air strata; the rod temperature correction accounts for variation in the length of the leveling rod's Invar/LO-VAR® strip resulting from temperature changes; and the rod scale correction ensures a uniform scale conforming to the international length standard. Furthermore, the concept of height systems is introduced, in which all types of height (orthometric, dynamic, normal, gravity correction, and equipotential surface) are investigated. The "Characteristic Straight Line Method" is slightly more convenient than the "Characteristic Circle Method": it permits the evaluation of a displacement of very small, even infinitesimal, magnitude.
The inclination of the landslide is given by the inverse of the distance from the reference point O to the "Characteristic Straight Line"; its direction is given by the bearing of the normal directed from point O to the Characteristic Straight Line (Fig. 6). A "best estimate" of the topometric observations was used to measure the elevations of carefully selected points before and after the deformation. Gross errors have been eliminated by statistical analyses and by comparing the heights within local neighborhoods. The results of a test in an area where very interesting land surface deformation occurs are reported, together with monitoring under different options and a qualitative comparison of results based on a sufficient number of check points.

MDA of Hexagonal Honeycomb Plates used for Space Applications

The purpose of this paper is to perform a multidisciplinary design and analysis (MDA) of honeycomb panels used in satellite structural design. All the analysis is based on clamped-free boundary conditions. In the present work, detailed finite element models of honeycomb panels are developed and analysed. Experimental tests were carried out on a honeycomb specimen, with the goal of validating the modal analysis performed by the finite element method and the existing equivalent approaches. The obtained results show good agreement between the finite element analysis, the equivalent models, and the test results: the difference is less than 4% for the first two frequencies and less than 10% for the third. The equivalent model presented in this analysis thus achieves good accuracy. Moreover, the investigations carried out in this research address the honeycomb plate modal analysis under several aspects, including geometrical variation, by studying the influence of the dimension parameters on the modal frequencies as well as the variation of the core and skin materials of the honeycomb. The various results obtained in this paper are promising and show that the geometric parameters and the type of material affect the value of the honeycomb plate modal frequency.

Automatic Segmentation of Dermoscopy Images Using Histogram Thresholding on Optimal Color Channels

Automatic segmentation of skin lesions is the first step towards developing a computer-aided diagnosis of melanoma. Although numerous segmentation methods have been developed, few studies have focused on determining the most discriminative and effective color space for the melanoma application. This paper proposes a novel automatic segmentation algorithm using color space analysis and clustering-based histogram thresholding, which is able to determine the optimal color channel for segmentation of skin lesions. To demonstrate the validity of the algorithm, it is tested on a set of 30 high-resolution dermoscopy images, and a comprehensive evaluation of the results is provided in which borders manually drawn by four dermatologists are compared to the automated borders detected by the proposed algorithm. The evaluation is carried out using three previously used metrics (accuracy, sensitivity, and specificity) and a new metric of similarity. Through ROC analysis and ranking of the metrics, it is shown that the best results are obtained with the X and XoYoR color channels, which result in an accuracy of approximately 97%. The proposed method is also compared with two state-of-the-art skin lesion segmentation methods, demonstrating its effectiveness and superiority.
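Clustering-based histogram thresholding in the style of Otsu's method can be sketched as follows; the single-channel toy data stands in for the optimal color channel selected by the algorithm, and the pixel values are hypothetical:

```python
def otsu_threshold(values, bins=256):
    """Clustering-based histogram thresholding (Otsu): choose the cut
    that maximizes the between-class variance of the two clusters."""
    hist = [0] * bins
    for v in values:
        hist[v] += 1
    total = len(values)
    sum_all = sum(i * cnt for i, cnt in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(bins):
        w0 += hist[t]                  # pixels at or below candidate cut
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal "lesion vs. skin" intensities: a dark cluster near 40
# and a bright cluster near 200 (hypothetical values).
pixels = [38, 40, 42, 44, 40, 198, 200, 202, 204, 200]
th = otsu_threshold(pixels)
print(40 <= th < 198)  # True: the cut falls between the two clusters
```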

A Novel Machining Signal Filtering Technique: Z-notch Filter

A filter is used to remove undesirable frequency information from a dynamic signal. This paper shows that the Z-notch filtering technique can be applied to remove noise from a machining signal. In machining, the noise components were identified from the sound produced by the operation of the machine components themselves, such as the hydraulic system, the motor, and the machine environment. By correlating the noise components with the measured machining signal, the components of interest, which are less affected by the noise, can be extracted; the filtered signal is thus more reliable to analyse, in terms of noise content, than the unfiltered signal. Significantly, the I-kaz method, comprising a three-dimensional graphical representation and the I-kaz coefficient Z∞, could differentiate between the filtered and the unfiltered signal: a larger scattering space and a higher value of Z∞ indicate that the signal is highly corrupted by noise. This method can be utilised as a proactive tool for evaluating the noise content of a signal, which is as important as its elimination, especially for machining fault diagnosis. The Z-notch filtering technique was reliable in extracting noise components from the measured machining signal with high efficiency: even though the measured signal was exposed to severe noise disruption, the signal generated by the interaction between the cutting tool and the workpiece could still be recovered. Therefore, noise that would alter the original signal features and consequently degrade the useful sensory information can be eliminated.
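The abstract does not give the Z-notch formulation, so the sketch below uses a standard second-order IIR notch filter to illustrate the idea of removing a known noise frequency from a machining signal; the frequencies and pole radius are assumed values:

```python
import math

def notch_filter(signal, f0, fs, r=0.95):
    """Standard second-order IIR notch: zeros on the unit circle at f0
    (so that frequency is nulled), poles at radius r just inside it,
    leaving the rest of the spectrum nearly untouched.
    H(z) = (1 - 2cos(w0)z^-1 + z^-2) / (1 - 2r cos(w0)z^-1 + r^2 z^-2)
    """
    w0 = 2.0 * math.pi * f0 / fs
    b1 = -2.0 * math.cos(w0)
    a1, a2 = -2.0 * r * math.cos(w0), r * r
    out, xm1, xm2, ym1, ym2 = [], 0.0, 0.0, 0.0, 0.0
    for x in signal:
        y = x + b1 * xm1 + xm2 - a1 * ym1 - a2 * ym2
        out.append(y)
        xm2, xm1 = xm1, x
        ym2, ym1 = ym1, y
    return out

fs, n = 1000.0, 4000
# Hypothetical machining signal: a 50 Hz "cutting" component plus a
# 200 Hz machine-noise component that the notch is tuned to remove.
sig = [math.sin(2 * math.pi * 50 * k / fs)
       + math.sin(2 * math.pi * 200 * k / fs) for k in range(n)]
out = notch_filter(sig, f0=200.0, fs=fs)
rms = lambda s: (sum(v * v for v in s) / len(s)) ** 0.5
print(rms(sig) > rms(out[500:]) > 0.5)  # 200 Hz removed, 50 Hz kept
```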

A Computationally Efficient Design for Prototype Filters of an M-Channel Cosine Modulated Filter Bank

The paper discusses a computationally efficient method for designing the prototype filters required to implement an M-band cosine modulated filter bank. The prototype filter is formulated as an optimal interpolated FIR (IFIR) filter, using the optimum interpolation factor that requires the minimum number of multipliers. The model filter and the image suppressor are designed using the Kaiser window, and the method optimizes a single parameter, the cutoff frequency, to minimize the distortion in the overlapping passband.
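The Kaiser-window design step can be sketched as follows (illustrative tap count, cutoff, and beta, not the paper's optimized values):

```python
import math

def i0(x):
    """Zeroth-order modified Bessel function, by its power series."""
    s, term, k = 1.0, 1.0, 1
    while term > 1e-12 * s:
        term *= (x / (2.0 * k)) ** 2
        s += term
        k += 1
    return s

def kaiser_lowpass(num_taps, cutoff, beta):
    """Linear-phase FIR lowpass: ideal sinc response multiplied by a
    Kaiser window (cutoff in cycles per sample, 0 < cutoff < 0.5)."""
    m = num_taps - 1
    h = []
    for n in range(num_taps):
        t = n - m / 2.0
        ideal = (2.0 * cutoff if t == 0
                 else math.sin(2.0 * math.pi * cutoff * t) / (math.pi * t))
        w = i0(beta * math.sqrt(1.0 - (2.0 * n / m - 1.0) ** 2)) / i0(beta)
        h.append(ideal * w)
    return h

# Hypothetical model-filter parameters for illustration.
h = kaiser_lowpass(num_taps=41, cutoff=0.125, beta=6.0)
dc_gain = sum(h)               # frequency response at 0 Hz
print(abs(dc_gain - 1.0) < 0.01)  # True: near-unity passband gain
```

In the IFIR formulation, a filter like this serves as the model filter, while a second, much shorter Kaiser-window filter suppresses the spectral images introduced by interpolation.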

Identification of Wideband Sources Using Higher Order Statistics in Noisy Environment

This paper deals with the localization of wideband sources. We develop a new approach for estimating the wideband source parameters, based on the higher-order statistics of the recorded data, in order to eliminate the Gaussian components from the signals received on the various hydrophones; the sea-bottom noise is in fact regarded as Gaussian. Thanks to the coherent signal subspace algorithm, based on the cumulant matrix of the received data instead of the cross-spectral matrix, the wideband correlated sources are accurately located even in a very noisy environment. We demonstrate the performance of the proposed algorithm on real data recorded during an underwater acoustics experiment.
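The key property exploited here, that fourth-order cumulants vanish for Gaussian data, can be checked numerically; the sequences below are synthetic stand-ins for hydrophone noise and signal:

```python
import random

def cum4(x):
    """Sample fourth-order cumulant (zero-lag) of a sequence:
    C4 = E[x^4] - 3 (E[x^2])^2 after mean removal.  It vanishes for
    Gaussian data, which is what lets cumulant-based methods suppress
    Gaussian sea-bottom noise."""
    n = len(x)
    m = sum(x) / n
    xc = [v - m for v in x]
    m2 = sum(v * v for v in xc) / n
    m4 = sum(v ** 4 for v in xc) / n
    return m4 - 3.0 * m2 * m2

random.seed(42)
n = 50000
gauss = [random.gauss(0.0, 1.0) for _ in range(n)]       # Gaussian noise
uniform = [random.uniform(-1.0, 1.0) for _ in range(n)]  # non-Gaussian

# Gaussian cumulant ~ 0; uniform cumulant ~ -2/15 (its exact value).
print(abs(cum4(gauss)) < 0.1, abs(cum4(uniform) + 2.0 / 15.0) < 0.05)
```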

A Sub-Pixel Image Registration Technique with Applications to Defect Detection

This paper presents a useful sub-pixel image registration method using line segments and a sub-pixel edge detector. In this approach, straight line segments are first extracted from gray images at the pixel level before applying the sub-pixel edge detector. Next, all sub-pixel line edges are mapped onto the orientation-distance parameter space to solve for line correspondence between images. Finally, the registration parameters with sub-pixel accuracy are solved analytically via two linear least-squares problems. The present approach can be applied to various fields where fast registration with sub-pixel accuracy is required. To illustrate, it is applied to the inspection of printed circuits on a flat panel. A numerical example shows that the present approach is effective and accurate when target images contain a sufficient number of line segments, which is true in many industrial problems.
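The least-squares step can be sketched for the translation part as follows, with lines in (orientation, distance) form and a hypothetical sub-pixel shift (rotation is assumed already resolved in this toy example):

```python
import math

def solve_translation(lines, lines_shifted):
    """Least-squares translation from matched lines.  Each line is
    (theta, d) with equation x*cos(theta) + y*sin(theta) = d, so a
    shift (tx, ty) changes d to d + tx*cos(theta) + ty*sin(theta).
    Stacking one equation per matched line gives an overdetermined
    2-unknown system, solved here via the normal equations."""
    s_cc = s_cs = s_ss = b_c = b_s = 0.0
    for (th, d), (_, d2) in zip(lines, lines_shifted):
        c, s, dd = math.cos(th), math.sin(th), d2 - d
        s_cc += c * c; s_cs += c * s; s_ss += s * s
        b_c += c * dd; b_s += s * dd
    det = s_cc * s_ss - s_cs * s_cs   # needs >= 2 non-parallel lines
    tx = (s_ss * b_c - s_cs * b_s) / det
    ty = (s_cc * b_s - s_cs * b_c) / det
    return tx, ty

# Hypothetical sub-pixel line edges before/after a (0.3, -0.7) shift.
lines = [(0.0, 5.0), (math.pi / 2, 2.0), (math.pi / 4, 1.0)]
shift = (0.3, -0.7)
shifted = [(th, d + shift[0] * math.cos(th) + shift[1] * math.sin(th))
           for th, d in lines]
tx, ty = solve_translation(lines, shifted)
print(round(tx, 6), round(ty, 6))  # 0.3 -0.7
```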

Analytical Model Prediction: Micro-Cutting Tool Forces with the Effect of Friction on Machining Titanium Alloy (Ti-6Al-4V)

In this paper, a methodology for predicting the tool forces in oblique machining is introduced by adopting the orthogonal technique. The analytical calculation is mostly based on the Devries model, with some parts of the methodology adopted from the Armarego-Brown model. Model validation is performed by comparing the predictions with experimental data from machining titanium alloy (Ti-6Al-4V), from a micro-cutting tool perspective. Good agreement with the experiments is observed. A detailed friction formulation affecting the tool forces has also been examined, with reasonable results obtained.

Genetic Algorithm Parameters Optimization for Bi-Criteria Multiprocessor Task Scheduling Using Design of Experiments

Multiprocessor task scheduling is an NP-hard problem, and the Genetic Algorithm (GA) has proven an excellent technique for finding near-optimal solutions. In the past, several GA-based methods have been proposed for this problem, but all of them consider a single criterion. In the present work, minimization of a bi-criteria multiprocessor task scheduling objective, a weighted sum of makespan and total completion time, is considered. The efficiency and effectiveness of a genetic algorithm depend on its parameters, such as the crossover operator, mutation operator, crossover probability, and selection function. The effects of the GA parameters on the bi-criteria fitness function, and the subsequent setting of the parameters, have been studied using the central composite design (CCD) approach of response surface methodology (RSM) from Design of Experiments. Experiments have been performed at different levels of the GA parameters, and analysis of variance has been carried out to identify the parameters significant for minimizing makespan and total completion time simultaneously.
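A minimal GA for the bi-criteria objective is sketched below; the task set, weights, and GA parameter values are illustrative placeholders for the CCD-tuned settings studied in the paper:

```python
import random

# Bi-criteria multiprocessor scheduling: minimize
# W1*makespan + W2*total_completion_time for independent tasks.
random.seed(7)
TASKS = [4, 7, 2, 9, 3, 6, 5, 8]   # hypothetical processing times
M = 3                               # number of processors
W1, W2 = 0.5, 0.5                   # illustrative criteria weights

def fitness(chrom):
    """chrom[i] = processor assigned to task i; tasks on a processor
    run in index order, so each task completes at the running load."""
    loads = [0.0] * M
    total_completion = 0.0
    for task, proc in zip(TASKS, chrom):
        loads[proc] += task
        total_completion += loads[proc]
    return W1 * max(loads) + W2 * total_completion

def evolve(pop_size=30, generations=60, pc=0.8, pm=0.1):
    pop = [[random.randrange(M) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        nxt = pop[:2]                               # elitism: keep best two
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:10], 2)     # truncation selection
            child = list(p1)
            if random.random() < pc:                # one-point crossover
                cut = random.randrange(1, len(TASKS))
                child = p1[:cut] + p2[cut:]
            if random.random() < pm:                # mutation: move a task
                child[random.randrange(len(TASKS))] = random.randrange(M)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

best = evolve()
print(fitness(best) <= fitness([0] * len(TASKS)))  # beats all-on-one-CPU
```

Tuning pc, pm, pop_size, and the selection scheme is exactly the parameter-setting problem the paper attacks with CCD/RSM instead of trial and error.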

Drafting the Design and Development of a Microcontroller-Based Portable Soil Moisture Sensor for Advancement in Agro Engineering

Moisture is an important consideration in many contexts, ranging from irrigation, soil chemistry, golf courses, corrosion and erosion, road conditions, weather prediction, and livestock feed moisture levels to water seepage. Vegetation and crops always depend more on the moisture available at root level than on precipitation. In this paper, the design of an instrument is discussed which reports the variation in the moisture content of soil. This is done by measuring the amount of water in the soil through the variation in its capacitance, using a capacitive sensor. The greatest advantage of a soil moisture sensor is reduced water consumption. The sensor can also be used to set lower and upper thresholds to maintain optimum soil moisture saturation and minimize wilting; this contributes to deeper plant root growth, reduced soil runoff and leaching, and less favorable conditions for insects and fungal diseases. The capacitance method is preferred because it provides the absolute amount of water content and can measure water content at any depth.
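A sketch of the capacitance-to-moisture conversion is given below; the probe geometry is hypothetical, and the permittivity-to-moisture mapping uses the widely cited Topp calibration rather than anything specific to this instrument:

```python
# Convert a capacitive-probe reading to volumetric water content.
# The parallel-plate relation C = e0 * er * A / d gives the soil's
# relative permittivity er from the measured capacitance, and the
# Topp et al. (1980) empirical calibration maps er to moisture.
E0 = 8.854e-12    # vacuum permittivity, F/m
A = 4e-4          # plate area, m^2 (hypothetical probe geometry)
D = 2e-3          # plate separation, m (hypothetical probe geometry)

def moisture_from_capacitance(c_farads):
    eps_r = c_farads * D / (E0 * A)          # invert C = e0*er*A/d
    # Topp equation: volumetric water content (m^3/m^3) from er.
    return (-5.3e-2 + 2.92e-2 * eps_r
            - 5.5e-4 * eps_r ** 2 + 4.3e-6 * eps_r ** 3)

c_dry = 5.0 * E0 * A / D    # reading corresponding to er = 5 (dry soil)
c_wet = 25.0 * E0 * A / D   # reading corresponding to er = 25 (wet soil)
print(round(moisture_from_capacitance(c_dry), 3),
      round(moisture_from_capacitance(c_wet), 3))  # 0.08 0.4
```

A microcontroller would perform the same two-step conversion on the digitized capacitance reading before comparing the result against the irrigation thresholds.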

Dynamic Network Routing Method Based on Chromosome Learning

In this paper, we probe into the traffic assignment problem using a chromosome-learning-based path-finding method in simulation, which models the driver's behavior in the within-day process. By simply combining and modifying traffic-route chromosomes, the driver chooses his next route at each intersection. Various crossover and mutation rules are proposed, with extensive examples.