Bi-Criteria Latency Optimization of Intra- and Inter-Autonomous System Traffic Engineering

Traffic Engineering (TE) is the process of controlling how traffic flows through a network in order to facilitate efficient and reliable network operations while simultaneously optimizing network resource utilization and traffic performance. TE improves the management of data traffic within a network and provides better utilization of network resources. Much existing research treats intra-AS and inter-AS Traffic Engineering separately, but in reality each influences the other, so the network performance of both intra- and inter-Autonomous System (AS) traffic is not optimized properly. To achieve a better joint optimization of intra- and inter-AS TE, we propose a joint optimization technique that considers intra-AS features during inter-AS TE and vice versa. This work considers an important criterion, latency, both within an AS and between ASes, and proposes a Bi-Criteria Latency optimization model. Overall network performance, in terms of latency, can therefore be improved by applying this joint optimization technique.

A Hybrid Approach for Color Image Quantization Using K-means and Firefly Algorithms

Color image quantization (CQ) is an important problem in computer graphics and image processing. The aim of quantization is to reduce the number of colors in an image with minimum distortion. Clustering is a widely used technique for color quantization: all colors in an image are grouped into a small number of clusters. In this paper, we propose a new hybrid approach for color quantization using the firefly algorithm (FA) and the K-means algorithm. The firefly algorithm is a swarm-based algorithm that can be used for solving optimization problems. The proposed method overcomes the drawbacks of both algorithms, namely the tendency of K-means to converge to local optima and the premature convergence of the firefly algorithm. Experiments on three commonly used images and the comparison results show that the proposed algorithm surpasses both the baseline K-means clustering and the original firefly algorithm.
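Below is a minimal sketch of one way such a hybrid could be organized: each firefly holds a candidate palette seeded by K-means, fireflies move toward palettes with lower quantization error, and a final K-means pass refines the best palette. The population size, attractiveness parameters, and use of mean squared error as the brightness measure are illustrative assumptions, not the exact scheme of the paper.

```python
# Minimal sketch of a K-means + firefly hybrid for color quantization.
# Hypothetical parameter choices (K, population size, beta0, gamma, alpha).
import numpy as np
from sklearn.cluster import KMeans

def quantization_mse(pixels, palette):
    # distortion = mean squared distance of each pixel to its nearest palette color
    d = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).mean()

def kmeans_firefly(pixels, K=16, n_fireflies=8, n_iter=30,
                   beta0=1.0, gamma=1e-4, alpha=2.0, rng=None):
    rng = np.random.default_rng(rng)
    # Each firefly is one candidate palette; seed them with K-means runs.
    fireflies = [KMeans(n_clusters=K, n_init=1, random_state=i)
                 .fit(pixels).cluster_centers_ for i in range(n_fireflies)]
    cost = np.array([quantization_mse(pixels, p) for p in fireflies])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:            # j is "brighter" (lower distortion)
                    r2 = np.sum((fireflies[i] - fireflies[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    fireflies[i] = (fireflies[i]
                                    + beta * (fireflies[j] - fireflies[i])
                                    + alpha * rng.normal(size=fireflies[i].shape))
                    fireflies[i] = np.clip(fireflies[i], 0, 255)
                    cost[i] = quantization_mse(pixels, fireflies[i])
    best = fireflies[int(cost.argmin())]
    # Final K-means refinement around the best firefly palette.
    km = KMeans(n_clusters=K, init=best, n_init=1).fit(pixels)
    return km.cluster_centers_

# usage: pixels = image.reshape(-1, 3).astype(float); palette = kmeans_firefly(pixels)
```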

A Multi-Objective Optimization Approach to Optimize Vehicle Ride and Handling Characteristics

Vehicle suspension design must fulfill some conflicting criteria. Among these is ride comfort, which is attained by minimizing the acceleration transmitted to the sprung mass via the suspension spring and damper. Good handling is also a desirable property, but it requires a stiff suspension and is therefore in conflict with good ride. Another desirable feature of a suspension is the minimization of its maximum travel. This travel, called the suspension working space in the vehicle dynamics literature, is also a design constraint and favors good ride. In this research, a full-car model with 8 degrees of freedom has been developed and the three above-mentioned criteria, namely ride, handling, and working space, have been adopted as objective functions. The Multi-Objective Programming (MOP) discipline has been used to find the Pareto front, and some reasoning is used to choose a design point among these non-dominated points of the Pareto front.
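As a small illustration of the multi-objective step, the sketch below extracts the non-dominated (Pareto) set from a batch of candidate designs whose three objective values are assumed to come from the 8-DOF full-car model; the random objective values stand in for simulated ride, handling, and working-space measures.

```python
# Minimal sketch: extract the Pareto (non-dominated) set from sampled designs.
# The three objectives (ride acceleration, handling metric, working space) are
# placeholders for values that would come from the 8-DOF full-car model.
import numpy as np

def pareto_front(objectives):
    """objectives: (n_designs, n_objectives) array, all to be minimized."""
    n = objectives.shape[0]
    non_dominated = np.ones(n, dtype=bool)
    for i in range(n):
        if not non_dominated[i]:
            continue
        # design j dominates i if it is <= in every objective and < in at least one
        dominates_i = (np.all(objectives <= objectives[i], axis=1)
                       & np.any(objectives < objectives[i], axis=1))
        if dominates_i.any():
            non_dominated[i] = False
    return np.where(non_dominated)[0]

# Hypothetical usage: each row is [ride_rms_accel, handling_metric, working_space]
designs = np.random.rand(500, 3)
front = pareto_front(designs)
print(f"{len(front)} non-dominated designs out of {len(designs)}")
```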

Software Reliability Prediction Model Analysis

Software reliability prediction gives a great opportunity to measure the software failure rate at any point throughout system test. A software reliability prediction model provides a technique for improving reliability. Software reliability is a very important factor for estimating overall system reliability, which depends on the individual component reliabilities. It differs from hardware reliability in that it reflects design perfection. The main reason for software reliability problems is the high complexity of software. Various approaches can be used to improve the reliability of software. In this article we focus on a software reliability model, assuming that there is time redundancy, the value of which (the number of repeated transmissions of basic blocks) can be an optimization parameter. We consider the given mathematical model under the assumption that the system may experience not only irreversible failures but also failures that can be regarded as self-repairing, which significantly affect the reliability and accuracy of information transfer. The main task of this paper is to find the time distribution function (DF) of the transmission of an instruction sequence consisting of a random number of basic blocks. We consider the system software unreliable; the time between adjacent failures has an exponential distribution.
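The sketch below is a Monte Carlo illustration of the setting just described, under simplifying assumptions: block execution times are fixed, times between failures are exponential, a failure during a block forces that block to be retransmitted, and the number of basic blocks is random. The rates and block lengths are placeholders, not the paper's parameters.

```python
# Monte Carlo sketch of the time distribution for transmitting an instruction
# sequence of a random number of basic blocks, when the time between failures
# is exponential and a failure forces the affected block to be retransmitted.
import numpy as np

def simulate_total_time(lam=0.05, block_time=1.0, mean_blocks=10, rng=None):
    rng = np.random.default_rng(rng)
    n_blocks = rng.geometric(1.0 / mean_blocks)   # random number of basic blocks
    total = 0.0
    for _ in range(n_blocks):
        while True:
            failure_gap = rng.exponential(1.0 / lam)  # time to next failure
            if failure_gap >= block_time:             # block finishes before failure
                total += block_time
                break
            total += failure_gap                      # failure: retransmit the block
    return total

samples = np.array([simulate_total_time(rng=i) for i in range(5000)])
t = np.sort(samples)
empirical_df = np.arange(1, len(t) + 1) / len(t)      # empirical DF F(t)
print("median transmission time:", np.median(samples))
```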

An Exhaustive Review of Die Sinking Electrical Discharge Machining Process and Scope for Future Research

Electrical discharge machining (EDM) is especially used for manufacturing parts with complex 3-D geometry and hard materials that are extremely difficult to machine by conventional machining processes. In this paper, the authors review the research work carried out in the development of die-sinking EDM within the past decades for the improvement of machining characteristics such as material removal rate, surface roughness, and tool wear ratio. In this review, the various techniques reported by EDM researchers for improving the machining characteristics are categorized as process parameter optimization, multi-spark techniques, powder-mixed EDM, servo control systems, and pulse discrimination. Finally, a flexible machine controller is suggested for die-sinking EDM to enhance the machining characteristics and achieve high-level automation. Die-sinking EDM can thus be integrated into a Computer Integrated Manufacturing environment, as required by agile manufacturing systems.

Techniques for Reliability Evaluation in Distribution System Planning

This paper presents reliability evaluation techniques which are applied in distribution system planning studies and operation. Reliability of distribution systems is an important issue in power engineering for both utilities and customers. Reliability is a key issue in the design and operation of electric power distribution systems and their loads. Reliability evaluation of distribution systems has been the subject of many recent papers, and the modeling and evaluation techniques have improved considerably.

STRPRO Tool for Manipulation of Stratified Programs Based on SEPN

Negation is useful in the majority of real-world applications. However, its introduction leads to semantic and canonical problems. SEPN nets are a well-adapted extension of predicate nets for the definition and manipulation of stratified programs. This formalism is characterized by two main contributions. The first concerns the management of the whole class of stratified programs. The second contribution is related to the optimization of usual operations (maximal stratification, incremental updates, etc.). We propose, in this paper, useful algorithms for manipulating stratified programs using SEPN. These algorithms were implemented and validated with the STRPRO tool.
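As one concrete point of reference for the stratification-related operations, the sketch below implements the classical stratifiability test (no cycle of the predicate dependency graph may pass through a negative edge) using strongly connected components. It illustrates the standard criterion only, not the SEPN-based construction used by STRPRO.

```python
# Minimal sketch of a stratifiability check, the classical criterion behind
# "maximal stratification": a program is stratifiable iff no cycle of the
# predicate dependency graph contains a negative edge.
import networkx as nx

def is_stratifiable(rules):
    """rules: list of (head, [(body_pred, is_negated), ...])."""
    g = nx.DiGraph()
    neg_edges = set()
    for head, body in rules:
        for pred, negated in body:
            g.add_edge(pred, head)          # head depends on body predicate
            if negated:
                neg_edges.add((pred, head))
    for scc in nx.strongly_connected_components(g):
        for u, v in neg_edges:
            if u in scc and v in scc:       # negative edge inside a cycle
                return False
    return True

# Hypothetical program: p :- q, not r.   r :- p.   (not stratifiable)
rules = [("p", [("q", False), ("r", True)]), ("r", [("p", False)])]
print(is_stratifiable(rules))   # False
```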

Feature Based Dense Stereo Matching using Dynamic Programming and Color

This paper presents a new feature-based dense stereo matching algorithm that obtains the dense disparity map via dynamic programming. After extracting suitable features, we use matching constraints such as the epipolar line, the disparity limit, ordering, and a limit on the directional derivative of disparity. A coarse-to-fine multiresolution strategy is also used to decrease the search space and therefore increase the accuracy and processing speed. The proposed method links the detected feature points into chains and compares some of the feature points from different chains to increase the matching speed. We also employ color stereo matching to increase the accuracy of the algorithm. After feature matching, we use dynamic programming to obtain the dense disparity map. It differs from classical DP methods in stereo vision, since it employs the sparse disparity map obtained from the feature-based matching stage. The DP is also performed on a scan line, between any two matched feature points on that scan line; thus, our algorithm is truly an optimization method. It offers a good trade-off between accuracy and computational efficiency. According to our experimental results, the proposed algorithm increases the accuracy by 20 to 70% and reduces the running time by almost 70%.
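The sketch below illustrates the dynamic-programming stage on a single scan-line segment between two already-matched feature points, using an absolute-difference data cost and a linear smoothness penalty; the cost model and penalty value are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of scan-line dynamic programming between two matched feature
# points (x0, x1) on the same row, searching disparities in [d_min, d_max].
import numpy as np

def dp_segment(left_row, right_row, x0, x1, d_min, d_max, penalty=2.0):
    disparities = np.arange(d_min, d_max + 1)
    nd = len(disparities)
    INF = 1e9
    cost = np.full((x1 - x0 + 1, nd), INF)
    back = np.zeros_like(cost, dtype=int)
    for i, x in enumerate(range(x0, x1 + 1)):
        for j, d in enumerate(disparities):
            if x - d < 0 or x - d >= len(right_row):
                continue
            data = abs(float(left_row[x]) - float(right_row[x - d]))  # data cost
            if i == 0:
                cost[i, j] = data
            else:
                # smoothness: penalize disparity jumps between neighboring pixels
                trans = cost[i - 1] + penalty * np.abs(disparities - d)
                back[i, j] = int(trans.argmin())
                cost[i, j] = data + trans.min()
    # backtrack the minimum-cost disparity path
    path = [int(cost[-1].argmin())]
    for i in range(len(cost) - 1, 0, -1):
        path.append(back[i, path[-1]])
    return disparities[np.array(path[::-1])]
```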

On-Line Geometrical Identification of Reconfigurable Machine Tool using Virtual Machining

One of the main research directions in the CAD/CAM machining area is the reduction of machining time. Feedrate scheduling is one of the advanced techniques that allows the uncut chip area, and consequently the main cutting force, to be kept constant. There are two main approaches to feedrate optimization. The first consists of cutting force monitoring, which requires complex equipment for force measurement, after which the feedrate is set according to the cutting force variation. The second is to optimize the feedrate by keeping the material removal rate constant with respect to the cutting conditions. In this paper, a new approach is proposed that uses an extended database which replaces the system model. The feedrate schedule is determined based on the identification of the reconfigurable machine tool, with the feed value determined from the uncut chip section area, the tool-blank contact length, and the geometrical roughness. The first stage consists of monitoring the blank and the tool to determine their actual profiles. The next stage is the determination of the programmed tool path that allows the target profile of the piece to be obtained. The graphic representation environment models the tool and blank regions, after which the tool model is positioned relative to the blank model according to the programmed tool path. For each of these positions, the geometrical roughness value, the uncut chip area, and the tool-blank contact length are calculated. Each of these parameters is compared with its admissible value and, according to the result, the feed value is established. This approach has the following advantages: the cutting force can be predicted for complex cutting processes; the real cutting profile, which deviates from the theoretical profile, is taken into account; the blank-tool contact length can be limited; and the programmed tool path can be corrected so that the target profile is obtained. Applying this method yields data sets that allow the feedrate to be scheduled so that the uncut chip area, and consequently the cutting force, remains constant, which allows the machine tool to be used more efficiently and the machining time to be reduced.
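The sketch below illustrates the per-position feed decision described above: the computed uncut chip area, tool-blank contact length, and geometrical roughness are compared with admissible limits and the feed is reduced until all limits are met. The limits, the scaling rule, and the compute_geometry placeholder are assumptions standing in for the virtual machining model.

```python
# Sketch of the per-position feed decision. compute_geometry(pos, feed) is a
# placeholder for the virtual machining model and must return
# (uncut_chip_area, contact_length, roughness) for that tool position and feed.
def schedule_feed(positions, f_max, a_adm, l_adm, r_adm, compute_geometry):
    feeds = []
    for pos in positions:
        feed = f_max
        area, length, rough = compute_geometry(pos, feed)
        # reduce the feed step by step until every admissible limit is respected
        while (area > a_adm or length > l_adm or rough > r_adm) and feed > 1e-3:
            feed *= min(a_adm / area if area > a_adm else 1.0,
                        l_adm / length if length > l_adm else 1.0,
                        0.9)
            area, length, rough = compute_geometry(pos, feed)
        feeds.append(feed)
    return feeds
```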

A Hybrid Multi-Objective Algorithm for Flexible Job Shop Scheduling

Scheduling for the flexible job shop is very important in both production management and combinatorial optimization. However, it is quite difficult to achieve an optimal solution to this problem with traditional optimization approaches owing to the high computational complexity. Combining several optimization criteria induces additional complexity and new problems. In this paper, a Pareto approach to solve multi-objective flexible job shop scheduling problems is proposed. The objectives considered are minimizing the overall completion time (makespan) and the total weighted tardiness (TWT). An effective simulated annealing algorithm based on the proposed approach is presented to solve the multi-objective flexible job shop scheduling problem. An external memory of non-dominated solutions is used to save and update the non-dominated solutions during the solution process. Numerical examples are used to evaluate and study the performance of the proposed algorithm. The proposed algorithm can be applied easily in real factory conditions and to large-size problems, and should thus be useful to both practitioners and researchers.
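A minimal sketch of such a simulated-annealing skeleton with an external non-dominated archive is shown below; the neighborhood move, the scalarized acceptance rule, and the cooling parameters are illustrative assumptions, and evaluate() and neighbor() stand in for the flexible-job-shop specifics.

```python
# Minimal sketch of simulated annealing with an external non-dominated archive
# for the two objectives (makespan, total weighted tardiness).
import math, random

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, sol, objs):
    if any(dominates(o, objs) for _, o in archive):
        return archive                               # candidate is dominated
    archive = [(s, o) for s, o in archive if not dominates(objs, o)]
    archive.append((sol, objs))
    return archive

def pareto_sa(init, evaluate, neighbor, T0=100.0, alpha=0.95, iters=2000):
    current, curr_objs = init, evaluate(init)
    archive = [(current, curr_objs)]
    T = T0
    for _ in range(iters):
        cand = neighbor(current)
        cand_objs = evaluate(cand)
        # scalarized acceptance: total increase over both objectives
        delta = sum(c - o for c, o in zip(cand_objs, curr_objs))
        if delta <= 0 or random.random() < math.exp(-delta / T):
            current, curr_objs = cand, cand_objs
        archive = update_archive(archive, cand, cand_objs)
        T *= alpha
    return archive   # approximated Pareto set of (schedule, (makespan, TWT))
```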

Machining of FRP Composites by Abrasive Jet Machining: Optimization Using the Taguchi Method

Abrasive Jet Machining (AJM) is an unconventional machining process in which material is removed from brittle and hard materials in the form of micro-chips. With the increasing use of materials such as ceramics and composites in the manufacture of various mechanical and electronic components, AJM has become a useful technique for micro-machining. The present study highlights the influence of parameters such as pressure, stand-off distance (SOD), time, abrasive grain size, and nozzle diameter on the material removal of an FRP (Fiber Reinforced Polymer) composite machined by AJM. The results of the experiments were analyzed and optimized with the Taguchi method of optimization and ANOVA to obtain the optimal values.
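The sketch below illustrates the signal-to-noise (S/N) analysis at the core of the Taguchi method, using the larger-the-better S/N ratio appropriate for material removal and averaging S/N per factor level to pick the best level; the orthogonal array and response values are placeholders, not the experimental data of this study.

```python
# Sketch of Taguchi signal-to-noise (S/N) analysis with the
# "larger-the-better" ratio: S/N = -10 * log10( mean(1 / y^2) ).
import numpy as np

def sn_larger_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical L4-style array: columns = factor levels for pressure and SOD
design = np.array([[1, 1], [1, 2], [2, 1], [2, 2]])
responses = [[0.8, 0.9], [1.1, 1.0], [1.3, 1.2], [1.6, 1.5]]  # replicated MRR values

sn = np.array([sn_larger_better(r) for r in responses])
for col, name in enumerate(["pressure", "SOD"]):
    means = {lvl: sn[design[:, col] == lvl].mean() for lvl in np.unique(design[:, col])}
    best = max(means, key=means.get)
    print(f"{name}: mean S/N per level {means}, best level {best}")
```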

Strength Optimization of Induction Hardened Splined Shaft – Material and Geometric Aspects

The current study presents a modeling framework to determine the torsion strength of an induction hardened splined shaft by considering geometry and material aspects, with the aim of optimizing the static torsion strength through the selection of spline geometry and hardness depth. Six different spline geometries and seven different hardness profiles, including non-hardened and through-hardened shafts, have been considered. The results reveal that the torque that causes initial yielding of the induction hardened splined shaft is strongly dependent on the hardness depth and the geometry of the spline teeth. Guidelines for selecting the appropriate hardness depth and spline geometry are given so that an optimum static torsion strength of the component can be achieved.

Process Optimization for Enhanced Production of Cell Biomass and Metabolites of Fluorescent Pseudomonad R81

The fluorescent pseudomonad strain R81 is a root-colonizing rhizobacterium which promotes the growth of many plants by various mechanisms. Its broth, containing siderophore (an iron-chelating compound) and 2,4-diacetylphloroglucinol (DAPG), is used for preparing bioinoculant formulations for agronomical applications. Glycerol was found to be the best carbon source for improved biomass production. Splitting the nitrogen source between NH4Cl and urea had a stabilizing effect on pH during batch cultivation. L-tryptophan at 0.5% in the medium increased siderophore production to 850 mg/l. During batch cultivation of the strain in a bioreactor, a maximum of 4 g/l of dry cell mass, 1.8 g/l of siderophore, and 20 mg/l of DAPG was achieved when glycerol was 15 g/l and the C/N ratio was maintained at 12.5. With intermittent feeding of fresh medium during fed-batch cultivation, the dry cell mass increased to 25 g/l and DAPG production improved to 70 mg/l.

Blind Identification of MA Models Using Cumulants

In this paper, several techniques for blind identification of moving average (MA) processes are presented. These methods utilize third- and fourth-order cumulants of the noisy observations of the system output. The system is driven by an independent and identically distributed (i.i.d.) non-Gaussian sequence that is not observed. Two nonlinear optimization algorithms, namely the Gradient Descent and Gauss-Newton algorithms, are presented. An algorithm based on the joint diagonalization of fourth-order cumulant matrices (FOSI) is also considered, as well as an improved version of the classical C(q, 0, k) algorithm based on the choice of the best 1-D slice of fourth-order cumulants. To illustrate the effectiveness of our methods, various simulation examples are presented.
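For reference, the sketch below implements a closed-form cumulant estimator in the spirit of the classical C(q, 0, k) approach, written with third-order cumulants (the Giannakis formula b(k) = C3y(q, k)/C3y(q, 0) for an MA(q) model with b(0) = 1). It is an illustrative baseline, not the improved best-slice or joint-diagonalization algorithms of the paper.

```python
# Closed-form cumulant-based MA estimate (third-order version):
#   b(k) = C3y(q, k) / C3y(q, 0),  C3y(t1, t2) = E[y(n) y(n+t1) y(n+t2)].
import numpy as np

def third_order_cumulant(y, t1, t2):
    y = y - y.mean()
    n = len(y)
    hi = min(n, n - t1, n - t2)           # valid range for nonnegative lags
    return np.mean(y[:hi] * y[t1:hi + t1] * y[t2:hi + t2])

def ma_coeffs_c_formula(y, q):
    c_q0 = third_order_cumulant(y, q, 0)
    return np.array([third_order_cumulant(y, q, k) / c_q0 for k in range(q + 1)])

# Hypothetical test: MA(2) driven by i.i.d. exponential (non-Gaussian) noise
rng = np.random.default_rng(0)
e = rng.exponential(1.0, 200_000) - 1.0            # zero-mean, skewed input
b_true = np.array([1.0, 0.9, 0.4])
y = np.convolve(e, b_true)[:len(e)]
print(ma_coeffs_c_formula(y, q=2))                  # approx [1.0, 0.9, 0.4]
```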

Optimal SSSC Placement for ATC Enhancement in Power Systems

This paper addresses the optimization of the available transmission capability (ATC) of power systems using a FACTS device, the SSSC, equipped with energy storage devices. The placement of the SSSC and the tuning of its parameters are illustrated. Voltage magnitude constraints at network buses, line transient stability constraints, and voltage breakdown constraints are considered. To support the calculations, a comprehensive program in Delphi is provided, which is able to simulate and trace the parameters of an SSSC installed on a specific line. Furthermore, the program is able to compute the ATC, the TTC, and the maximum value of their enhancement after installing the SSSC.

Damping Power System Oscillations Improvement by FACTS Devices: A Comparison between SSSC and STATCOM

The main objective of this paper is a comparative investigation of enhancing power system oscillation damping via the coordinated design of a power system stabilizer (PSS), a static synchronous series compensator (SSSC), and a static synchronous compensator (STATCOM). The design problem of the FACTS-based stabilizers is formulated as a GA-based optimization problem. In this paper, the eigenvalue analysis method is applied to the small-signal stability of a single machine infinite bus (SMIB) system installed with an SSSC and a STATCOM. The generator is equipped with a PSS. The proposed stabilizers are tested on a weakly connected power system with different disturbances and loading conditions. The aim is to enhance both rotor angle stability and overall power system stability. The eigenvalue analysis and non-linear simulation results are presented to show the effects of these FACTS-based stabilizers and reveal that the SSSC is the most effective at damping power system oscillations.
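As a small illustration of the eigenvalue analysis step, the sketch below computes the damping ratio of each mode of a linearized state matrix; the matrix used is a placeholder, not a linearized SMIB model with PSS, SSSC, or STATCOM.

```python
# Small-signal eigenvalue analysis sketch: for each eigenvalue
# lambda = sigma + j*omega of the linearized state matrix A, the damping
# ratio is zeta = -sigma / sqrt(sigma^2 + omega^2).
import numpy as np

def damping_ratios(A):
    eigs = np.linalg.eigvals(A)
    sigma, omega = eigs.real, eigs.imag
    zeta = -sigma / np.sqrt(sigma**2 + omega**2)
    return eigs, zeta

A = np.array([[0.0, 377.0, 0.0],
              [-0.1, -0.5, -0.2],
              [0.0, 0.1, -5.0]])     # illustrative 3-state linearization
eigs, zeta = damping_ratios(A)
for lam, z in zip(eigs, zeta):
    print(f"lambda = {lam:.3f}, damping ratio = {z:.3f}")
```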

An Integrated Design Evaluation and Assembly Sequence Planning Model using a Particle Swarm Optimization Approach

In the traditional concept of product life cycle management, the activities of design, manufacturing, and assembly are performed sequentially. The drawback is that the considerations in design may contradict the considerations in manufacturing and assembly, and different component designs can lead to different assembly sequences. Therefore, in some cases, a good design may result in a high cost in the downstream assembly activities. In this research, an integrated design evaluation and assembly sequence planning model is presented. Given a product requirement, there may be several alternative design cases for the components of the same product, and if a different design case is selected, the assembly sequence for constructing the product can differ. In this paper, the designed components are first represented using graph-based models, which are then transformed into assembly precedence constraints and assembly costs. A particle swarm optimization (PSO) approach is presented in which a particle is encoded as a position matrix defined by the design cases and the assembly sequences. The PSO algorithm simultaneously performs design evaluation and assembly sequence planning with the objective of minimizing the total assembly cost. As a result, the design cases and the assembly sequences can both be optimized. The main contribution lies in the new concept of the integrated design evaluation and assembly sequence planning model and the new PSO solution method. The test results on an example product show that the presented method is feasible and efficient for solving the integrated design evaluation and assembly planning problem.
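The sketch below shows one simplified way to encode the joint decision in a PSO, using random keys instead of the paper's position-matrix encoding: the first key selects the design case and the remaining keys are sorted to decode an assembly sequence, while assembly_cost() stands in for the graph-derived precedence and cost evaluation.

```python
# Random-key PSO sketch for jointly choosing a design case and an assembly
# sequence; a simplified stand-in for the paper's position-matrix encoding.
import numpy as np

def decode(position, n_cases):
    case = int(np.clip(round(position[0] * (n_cases - 1)), 0, n_cases - 1))
    sequence = np.argsort(position[1:])           # permutation of components
    return case, sequence

def pso(assembly_cost, n_cases, n_parts, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, rng=None):
    rng = np.random.default_rng(rng)
    dim = 1 + n_parts
    x = rng.random((n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_cost = np.array([assembly_cost(*decode(p, n_cases)) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, 1.0)
        cost = np.array([assembly_cost(*decode(p, n_cases)) for p in x])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], cost[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return decode(gbest, n_cases), pbest_cost.min()
```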

Optimized Facial Features-based Age Classification

The evaluation and measurement of human body dimensions are achieved by physical anthropometry. This research was conducted in view of the importance of anthropometric indices of the face in forensic medicine, surgery, and medical imaging. The main goal of this research is to optimize facial feature points by establishing a mathematical relationship among facial features and to use the optimized feature points for age classification. Since the selected facial feature points are located in the mouth, nose, eye, and eyebrow regions of the facial images, all desired facial feature points are extracted accurately. In the proposed method, sixteen Euclidean distances are calculated from the eighteen selected facial feature points, both vertically and horizontally. The mathematical relationships among the horizontal and vertical distances are established. Moreover, it is discovered that the facial feature distances follow a constant ratio throughout age progression: the distances between the specified feature points increase as a person ages from childhood, but the ratio of the distances does not change (d = 1.618). Finally, according to the proposed mathematical relationship, four independent feature distances related to eight feature points are selected from the sixteen distances and eighteen feature points, respectively. These four feature distances are used for age classification with a Support Vector Machine (SVM) trained by the Sequential Minimal Optimization (SMO) algorithm, achieving around 96% accuracy. The experimental results show that the proposed system is effective and accurate for age classification.
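A minimal sketch of the classification stage is given below: Euclidean distances between selected feature-point pairs form the feature vector, and an SVM is trained with scikit-learn's SVC, whose solver is SMO-based. The landmark pairs, training data, and age-group labels are placeholders, not the paper's annotated dataset.

```python
# Sketch of the classification stage: distances between selected facial
# feature points are turned into a feature vector and fed to an SVM.
import numpy as np
from sklearn.svm import SVC

# hypothetical indices of the eight feature points forming four distances
PAIRS = [(0, 1), (2, 3), (4, 5), (6, 7)]

def distance_features(landmarks):
    """landmarks: (n_points, 2) array of (x, y) facial feature coordinates."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j]) for i, j in PAIRS])

# placeholder training set: landmark arrays and age-group labels
rng = np.random.default_rng(0)
X = np.array([distance_features(rng.random((8, 2)) * 100) for _ in range(200)])
y = rng.integers(0, 4, size=200)                  # e.g. four age groups

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
print("training accuracy:", clf.score(X, y))
```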

Optimal Capacitor Placement in Distribution Feeders

Optimal capacitor allocation in distribution systems has been studied for a long time. It is an optimization problem whose objective is to determine the optimal sizes and locations of the capacitors to be installed. In this work, an overview of the capacitor placement problem in distribution systems is briefly introduced. The objective functions and constraints of the problem are listed, and the methodologies for solving the problem are summarized.

Capacity Optimization for Local and Cooperative Spectrum Sensing in Cognitive Radio Networks

Dynamic spectrum allocation solutions such as cognitive radio networks have been proposed as a key technology to exploit frequency segments that are spectrally underutilized. Cognitive radio users work as secondary users who need to constantly and rapidly sense the presence of primary users, or licensees, in order to utilize their frequency bands when they are inactive. Secondary users should run short sensing cycles to achieve higher throughput rates and to keep interference to the primary users low by immediately vacating their channels once they are detected. In this paper, the throughput-sensing time relationship in local and cooperative spectrum sensing is investigated under two distinct scenarios, namely constant primary user protection (CPUP) and constant secondary user spectrum usability (CSUSU). The simulation results show that the design of the sensing slot duration is critical and depends on the number of cooperating users under the CPUP scenario, whereas under CSUSU, adding more cooperating users has no effect if the sensing time exceeds 5% of the total frame duration.
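The sketch below illustrates the local sensing-throughput trade-off under the CPUP scenario, using the standard energy-detector relation between the target detection probability and the false-alarm probability; the SNR, frame length, sampling rate, and channel capacity values are assumptions, not the simulation settings of the paper.

```python
# Local sensing-throughput trade-off under constant primary user protection:
# with a fixed target detection probability Pd, the energy detector's
# false-alarm probability follows
#   Pf = Q( sqrt(2*snr + 1) * Qinv(Pd) + sqrt(tau * fs) * snr ),
# and the secondary throughput scales with (T - tau)/T * (1 - Pf).
import numpy as np
from scipy.stats import norm

def throughput(tau, T=0.1, fs=6e6, snr=0.01, pd_target=0.9, C0=6.66, p_h0=0.8):
    pf = norm.sf(np.sqrt(2 * snr + 1) * norm.isf(pd_target) + np.sqrt(tau * fs) * snr)
    return (T - tau) / T * C0 * (1 - pf) * p_h0    # achievable secondary throughput

taus = np.linspace(1e-4, 0.05, 200)                # candidate sensing durations (s)
rates = np.array([throughput(t) for t in taus])
best = taus[rates.argmax()]
print(f"optimal sensing time ~ {best*1e3:.2f} ms, throughput {rates.max():.3f}")
```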