Optimization Using Simulation of the Vehicle Routing Problem

A key element of many distribution systems is the routing and scheduling of vehicles servicing a set of customers. A wide variety of exact and approximate algorithms have been proposed for solving the vehicle routing problem (VRP). Because the VRP is NP-hard, exact algorithms can solve only relatively small instances, while several approximate algorithms have proven successful in finding feasible, though not necessarily optimal, solutions. Although different parts of the problem are stochastic in nature, little work has applied discrete-event system simulation to it. Presented here is optimization using simulation of the VRP: a simplified problem is developed in the ExtendSim™ simulation environment, and the ExtendSim™ evolutionary optimizer is used to minimize the total transportation cost. Results obtained from the model are very satisfactory. Further complexities of the problem are proposed for future consideration.
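As a purely illustrative sketch (not the ExtendSim™ model described above), the following Python snippet shows the kind of evolutionary search an optimizer can perform to minimize total transportation cost for a single-vehicle routing instance; the customer coordinates, population size, and operators are assumptions made for the example.

```python
# Minimal evolutionary search for a low-cost route (illustrative sketch only;
# the instance and parameters below are assumptions, not the paper's model).
import math
import random

random.seed(1)
customers = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(15)]
depot = (50.0, 50.0)

def route_cost(order):
    # Total travel distance: depot -> customers in 'order' -> depot.
    points = [depot] + [customers[i] for i in order] + [depot]
    return sum(math.dist(points[k], points[k + 1]) for k in range(len(points) - 1))

def mutate(order):
    # Swap two customer positions to create a neighbouring route.
    child = order[:]
    i, j = random.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

population = [random.sample(range(len(customers)), len(customers)) for _ in range(30)]
for _ in range(500):
    population.sort(key=route_cost)
    parents = population[:10]                      # keep the fittest routes
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = min(population, key=route_cost)
print("best route cost:", round(route_cost(best), 1))
```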

Stochastic Programming Model for Power Generation

We consider power system expansion planning under uncertainty. In our approach, integer programming and stochastic programming provide the basic framework. We develop a multistage stochastic programming model in which some of the variables are restricted to integer values. By exploiting a special property of the problem, called block-separable recourse, the problem is transformed into a two-stage stochastic program with recourse. The electric power capacity expansion problem is thus reformulated as a problem with first-stage integer variables and continuous second-stage variables, and an L-shaped algorithm for solving it is proposed.
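For orientation, a generic two-stage stochastic program with integer first-stage and continuous second-stage variables, of the kind the reformulation above produces, can be written as follows; the notation is generic, not the paper's own.

```latex
\begin{aligned}
\min_{x} \quad & c^{\top}x + \mathbb{E}_{\xi}\bigl[\, Q(x,\xi) \,\bigr] \\
\text{s.t.} \quad & Ax = b, \qquad x \in \mathbb{Z}_{+}^{n_1}, \\
\text{where} \quad & Q(x,\xi) = \min_{y \ge 0} \bigl\{\, q(\xi)^{\top} y \;:\; W y = h(\xi) - T(\xi)\, x \,\bigr\}.
\end{aligned}
```

The L-shaped method iteratively approximates the expected recourse cost by adding optimality and feasibility cuts to the first-stage problem.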

Inventory Control for a Joint Replenishment Problem with Stochastic Demand

Most papers model the Joint Replenishment Problem (JRP) as a (kT, S) policy, where kT is an integer multiple of a common review period T and S is a predefined order-up-to level. In general, the (T, S) policy is characterized by a long out-of-control period, which requires a large amount of safety stock compared to the (R, Q) policy. In this paper a probabilistic model is built in which the item with the shortest order interval (T), call it item (i), is modeled under an (R, Q) policy and its inventory is continuously reviewed, while the remaining items (j) are periodically reviewed at definite times corresponding to item (i).
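The following short Python sketch illustrates the mixed policy described above, with item (i) under continuous (R, Q) review triggering a joint review of another item; the demand distributions and policy parameters are assumptions for the example, not the paper's model.

```python
# Illustrative simulation of the mixed policy: item (i) is under continuous
# (R, Q) review, and the other items are reviewed at item (i)'s ordering epochs.
import random
random.seed(0)

R, Q = 20, 60            # reorder point and order quantity for item (i)
S_j = 80                 # order-up-to level for a representative other item (j)
inv_i, inv_j = 50, 70    # starting inventory positions
orders = 0

for day in range(90):
    inv_i -= random.randint(0, 8)          # stochastic daily demand for item (i)
    inv_j -= random.randint(0, 6)          # stochastic daily demand for item (j)
    if inv_i <= R:                         # continuous review triggers a joint order
        inv_i += Q
        inv_j = S_j                        # item (j) is reviewed at item (i)'s epochs
        orders += 1

print("joint orders placed:", orders, "final positions:", inv_i, inv_j)
```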

An Enhanced Situational Awareness of AUV's Mission by Multirate Neural Control

This paper focuses on a critical component of situational awareness (SA): neural control of the depth flight of an autonomous underwater vehicle (AUV). Constant-depth flight is a challenging but important task for AUVs seeking a high level of autonomy under adverse conditions. Within the SA strategy, we propose multirate neural control of an AUV trajectory using a neural network model reference controller for a stochastic model of the non-trivial mid-small size AUV "r2D4". The control system has been demonstrated and evaluated through simulation of diving maneuvers in Simulink. The simulation results show that the chosen AUV model remains stable in the presence of high noise, and they suggest that fast SA, with economical use of battery energy, can be maintained by similar AUV systems during underwater search-and-rescue missions.

A Multi-Radio Multi-Channel Unification Power Control for Wireless Mesh Networks

Multi-Radio Multi-Channel Wireless Mesh Networks (MRMC-WMNs) operate at the backbone to access and route high volumes of traffic simultaneously. Such roles demand high network capacity and long "online" time at the expense of accelerated transmission-energy depletion and poor connectivity. This is the problem of transmission power control. Numerous power control methods for wireless networks exist in the literature; however, contributions towards MRMC configurations still face many challenges worth considering. In this paper, an energy-efficient power selection protocol called PMMUP is proposed at the link layer. The protocol first divides the MRMC-WMN into a set of unified channel graphs (UCGs), where a UCG consists of multiple radios interconnected via a common wireless channel. In each UCG, a stochastic linear quadratic cost function is formulated, and each user minimizes this cost function, which trades off the size of the unification states against the control action. Unification state variables come from independent UCGs and higher layers of the protocol stack. The PMMUP coordinates power optimizations at the network interface cards (NICs) of wireless mesh routers. The proposed PMMUP-based algorithm is shown analytically to converge quickly, at a linear rate, and performance evaluations through simulations confirm the efficacy of the proposed dynamic power control.
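Schematically, a per-user stochastic linear quadratic cost of the kind described above can be written in the generic form below; the symbols are illustrative placeholders rather than the paper's notation.

```latex
J_i = \mathbb{E}\!\left[ \sum_{k=0}^{N-1} \bigl( x_k^{\top} Q\, x_k + u_k^{\top} R\, u_k \bigr) + x_N^{\top} Q_N\, x_N \right],
```

where x_k collects the unification-state variables reported by the UCGs and higher layers, u_k is the transmission-power control action, and the weighting matrices Q and R encode the trade-off between the size of the unification states and the control effort.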

Trajectory-Based Modified Policy Iteration

This paper presents a new problem-solving approach that is able to generate optimal policies for finite-state stochastic sequential decision-making problems with high data efficiency. The proposed algorithm iteratively builds and improves an approximate Markov Decision Process (MDP) model, along with cost-to-go value approximations, by generating finite-length trajectories through the state space. The approach creates a synergy between the approximate evolving model and the approximate cost-to-go values, producing a sequence of improving policies that converges to the optimal policy through an intelligent and structured search of the policy space. The approach modifies the policy update step of policy iteration so as to yield fast and stable convergence to the optimal policy. We apply the algorithm to a non-holonomic mobile robot control problem and compare its performance with other Reinforcement Learning (RL) approaches, namely (a) Q-learning, (b) Watkins' Q(λ), and (c) SARSA(λ).
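The sketch below illustrates, in a generic tabular setting, the basic idea of building an approximate MDP model from sampled trajectories and improving the policy against it; the toy environment, cost structure, and update schedule are assumptions for the example, not the authors' exact algorithm.

```python
# Illustrative trajectory-based model building plus policy improvement.
import random
from collections import defaultdict

n_states, n_actions, gamma = 10, 2, 0.95
counts = defaultdict(lambda: defaultdict(int))   # (s, a) -> {s': visit count}
cost_sum = defaultdict(float)                    # accumulated one-step costs per (s, a)
V = [0.0] * n_states
policy = [0] * n_states

def step(s, a):
    # Placeholder stochastic environment: noisy random-walk dynamics, unit cost off-goal.
    s2 = min(n_states - 1, max(0, s + (1 if a else -1) + random.choice((-1, 0, 1))))
    return s2, 0.0 if s2 == n_states - 1 else 1.0

for _ in range(200):                             # generate finite-length trajectories
    s = random.randrange(n_states)
    for _ in range(30):
        a = policy[s] if random.random() > 0.1 else random.randrange(n_actions)
        s2, c = step(s, a)
        counts[(s, a)][s2] += 1
        cost_sum[(s, a)] += c
        s = s2
    # Improve the values and the policy against the current approximate model.
    for s in range(n_states):
        q = []
        for a in range(n_actions):
            n = sum(counts[(s, a)].values()) or 1
            exp_next = sum(k * V[s2] for s2, k in counts[(s, a)].items()) / n
            q.append(cost_sum[(s, a)] / n + gamma * exp_next)
        V[s], policy[s] = min(q), q.index(min(q))
```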

An Engineering Approach to Forecast Volatility of Financial Indices

By systematically applying different engineering methods, difficult financial problems become approachable. Using a combination of theory and techniques such as the wavelet transform, time series data mining, Markov chain based discrete stochastic optimization, and evolutionary algorithms, this work formulated a strategy to characterize and forecast non-linear time series. It attempted to extract typical features, including abrupt drops, jumps and other non-linearities, from the volatility data sets of the S&P 100 and S&P 500 indices. As a result, forecasting accuracy reached an average of over 75%, surpassing any other publicly available results on the forecasting of any financial index.
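As a hedged illustration of the wavelet-based feature extraction step (using a synthetic series and an assumed wavelet choice, not the paper's exact pipeline), one could proceed as follows.

```python
# Illustrative feature extraction from a volatility proxy with a wavelet transform.
import numpy as np
import pywt

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, 512) * (1 + 0.5 * np.sin(np.linspace(0, 8, 512)))
volatility = np.abs(returns)                          # crude volatility proxy

coeffs = pywt.wavedec(volatility, "db4", level=3)     # [approx, detail3, detail2, detail1]
features = [float(np.sum(c ** 2)) for c in coeffs]    # energy per scale as features
print("per-scale energies:", [round(f, 5) for f in features])
```

Per-scale energies of this kind can then feed a forecasting stage such as the Markov chain based discrete stochastic optimization or evolutionary algorithms mentioned above.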

Spurious Crests in Second-Order Waves

Occurrences of spurious crests on the troughs of large, relatively steep second-order Stokes waves are anomalous and not an inherent characteristic of real waves. Here, the effects of such occurrences on the statistics described by the standard second-order stochastic model are examined theoretically and by way of simulations. Theoretical results and simulations indicate that when spurious occurrences are sufficiently large, the standard model leads to physically unrealistic surface features and inaccuracies in the statistics of various surface features, in particular, the troughs and thus zero-crossing heights of large waves. Whereas inaccuracies can be fairly noticeable for long-crested waves in both deep and shallower depths, they tend to become relatively insignificant in directional waves.
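For reference, the deep-water second-order Stokes profile underlying such models has the textbook form

```latex
\eta(x,t) = a\cos\theta + \tfrac{1}{2}\, k a^{2}\cos 2\theta, \qquad \theta = kx - \omega t,
```

where the quadratic term sharpens crests and flattens troughs; in the random (stochastic) second-order model, quadratic interaction terms of this type are what can generate the spurious secondary crests on the troughs of large, steep waves examined above.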

A Stochastic Approach of Mitochondrial Dynamics

Mitochondria are dynamic organelles capable of interacting with each other. While the number of mitochondria in a cell varies, their quality and functionality depend on the operation of fusion, fission, motility and mitophagy. Several recent studies identify disruptions in the regulation of mitochondrial dynamics as an important factor in neurodegenerative diseases. In this paper a stochastic model in the BioAmbients calculus is presented, concerning mitochondrial fusion and its role in the renewal of the mitochondrial population in a cell. The model describes the successive and dependent stages of protein synthesis, protein activation and the merging of two independent mitochondria.
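As an informal illustration only (a Gillespie-style stochastic simulation, not the BioAmbients-calculus model itself, with assumed rate constants), the successive, dependent stages of synthesis, activation and fusion could be mimicked as follows.

```python
# Illustrative Gillespie-style simulation of sequential fusion stages.
import random
random.seed(2)

state = {"protein": 0, "active": 0, "mito_pairs": 10, "fused": 0}
rates = {"synthesis": 1.0, "activation": 0.5, "fusion": 0.2}
t = 0.0

while t < 50.0 and state["mito_pairs"] > 0:
    # Propensities of the three successive, dependent stages.
    a = {
        "synthesis": rates["synthesis"],
        "activation": rates["activation"] * state["protein"],
        "fusion": rates["fusion"] * state["active"] * state["mito_pairs"],
    }
    total = sum(a.values())
    t += random.expovariate(total)                 # time to the next event
    r, acc = random.uniform(0, total), 0.0
    for reaction, prop in a.items():
        acc += prop
        if r <= acc:
            break
    if reaction == "synthesis":
        state["protein"] += 1
    elif reaction == "activation":
        state["protein"] -= 1; state["active"] += 1
    else:
        state["active"] -= 1; state["mito_pairs"] -= 1; state["fused"] += 1

print(state)
```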

A Bi-Objective Model for Location-Allocation Problem within Queuing Framework

This paper proposes a bi-objective model for the facility location problem under a congestion system. The model is motivated by applications such as locating bank automated teller machines (ATMs) and servers in communication networks, and it is specifically suited to situations in which fixed service facilities are congested by stochastic demand within a queueing framework. We formulate the model from two perspectives simultaneously: (i) the customers and (ii) the service provider. The objectives are to minimize (i) the total expected travelling and waiting time and (ii) the average facility idle time. The model is a mixed-integer nonlinear programming problem belonging to the class of NP-hard problems. To solve it, two metaheuristic algorithms, the non-dominated sorting genetic algorithm (NSGA-II) and the non-dominated ranking genetic algorithm (NRGA), are proposed. Finally, to evaluate the performance of the two algorithms, numerical examples are generated and analyzed with several metrics to determine which algorithm works better.
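In generic form, and treating each open facility as an M/M/1 queue, the two objectives described above can be sketched as follows; the notation is illustrative, not the paper's exact formulation.

```latex
\begin{aligned}
\min \; Z_1 &= \sum_{i}\sum_{j} \lambda_i\, x_{ij}\bigl( t_{ij} + W_j \bigr),
  \qquad W_j = \frac{1}{\mu_j - \sum_{i}\lambda_i x_{ij}}, \\
\min \; Z_2 &= \frac{1}{\sum_j y_j}\sum_{j} y_j\left( 1 - \frac{\sum_i \lambda_i x_{ij}}{\mu_j} \right),
\end{aligned}
```

where x_ij allocates customer zone i to facility j, y_j indicates whether facility j is opened, t_ij is travel time, λ_i are demand rates and μ_j service rates; Z_1 is the expected travelling plus waiting (sojourn) time and Z_2 the average facility idle fraction.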

Probabilistic Model Development for Project Performance Forecasting

In this paper, a model for forecasting project cost performance is developed based on past project cost and time performance. The study presents a probabilistic project control concept intended to ensure an acceptable forecast of project cost performance. In this concept, project activities are classified into sub-groups called control accounts. A Stochastic S-Curve (SS-Curve) is then obtained for each sub-group, and the project SS-Curve is obtained by summing the sub-groups' SS-Curves. In this model, project cost uncertainties are represented by Beta distribution functions, extracted from a variety of sources, for the activity costs required to complete the project at selected time sections throughout project execution. Based on this model, after a given percentage of project progress, project performance is measured via Earned Value Management to adjust the initial cost probability distribution functions; the future project cost performance is then predicted using Monte Carlo simulation.
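The following minimal Monte Carlo sketch illustrates how Beta-distributed activity costs can be summed per control account to obtain a stochastic cost-to-complete band; all parameter values below are assumptions for the example, not project data.

```python
# Illustrative Monte Carlo sketch of a stochastic cost estimate from Beta-distributed activity costs.
import numpy as np

rng = np.random.default_rng(0)
n_sims = 10_000
# (alpha, beta, min_cost, max_cost) per activity in one control account.
activities = [(2.0, 5.0, 10.0, 30.0), (3.0, 3.0, 40.0, 60.0), (4.0, 2.0, 15.0, 45.0)]

totals = np.zeros(n_sims)
for a, b, lo, hi in activities:
    totals += lo + (hi - lo) * rng.beta(a, b, n_sims)   # scaled Beta cost draw

p10, p50, p90 = np.percentile(totals, [10, 50, 90])
print(f"cost to complete: P10={p10:.1f}  P50={p50:.1f}  P90={p90:.1f}")
```

Repeating such draws at each selected time section, and re-weighting the distributions with Earned Value measurements as progress is observed, yields the adjusted SS-Curve forecast described above.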

Mathematical Models of Flow Shop and Job Shop Scheduling Problems

In this paper, mathematical models for the permutation flow shop scheduling and job shop scheduling problems are proposed. The first model is a mixed integer programming formulation. As the problem is NP-hard, this model can only be used for smaller instances for which an optimal solution can be computed. For large instances, another model is proposed which is suitable for solving the problem by stochastic heuristic methods. For the job shop scheduling problem, a mathematical model and its main representation schemes are presented.
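For context, the makespan of a permutation π in an n-job, m-machine flow shop is evaluated by the standard recursion below (a textbook formulation used when heuristics search over permutations, not necessarily the paper's exact model).

```latex
\begin{aligned}
C_{\pi(1),1} &= p_{\pi(1),1}, \\
C_{\pi(j),1} &= C_{\pi(j-1),1} + p_{\pi(j),1}, && j = 2,\dots,n, \\
C_{\pi(1),k} &= C_{\pi(1),k-1} + p_{\pi(1),k}, && k = 2,\dots,m, \\
C_{\pi(j),k} &= \max\bigl(C_{\pi(j-1),k},\, C_{\pi(j),k-1}\bigr) + p_{\pi(j),k}, \\
C_{\max} &= C_{\pi(n),m},
\end{aligned}
```

where p_{j,k} is the processing time of job j on machine k.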

Multiple Sequence Alignment Using Optimization Algorithms

Proteins or genes that have similar sequences are likely to perform the same function. One of the most widely used techniques for sequence comparison is sequence alignment, which allows mismatches and insertions/deletions that represent biological mutations. Sequence alignment is usually performed on only two sequences; multiple sequence alignment is a natural extension of two-sequence alignment in which the emphasis is on finding an optimal alignment for a group of sequences. Several applicable techniques were examined in this research, from traditional methods such as dynamic programming to widely used stochastic optimization methods such as Genetic Algorithms (GAs) and Simulated Annealing. A framework combining a Genetic Algorithm with Simulated Annealing is presented to solve the Multiple Sequence Alignment problem. The Genetic Algorithm phase tries to find new regions of the solution space, while Simulated Annealing acts as an alignment improver for any near-optimal solution produced by the GA.
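The sketch below illustrates the "alignment improver" role of Simulated Annealing on a toy gapped alignment; the scoring scheme, gap-shift move, and cooling schedule are assumptions for the example, not the framework's exact operators.

```python
# Illustrative Simulated Annealing improvement of a gapped multiple alignment.
import math, random
random.seed(3)

alignment = ["AC-GT", "ACAGT", "A-CGT"]          # e.g. a near-optimal output of a GA phase

def sum_of_pairs(rows):
    # +1 per matching residue pair in a column, -1 per mismatch or gap pair.
    score = 0
    for col in zip(*rows):
        for i in range(len(col)):
            for j in range(i + 1, len(col)):
                score += -1 if "-" in (col[i], col[j]) or col[i] != col[j] else 1
    return score

def shift_gap(rows):
    # Move one randomly chosen gap to an adjacent position in its row.
    rows = rows[:]
    r = random.randrange(len(rows))
    gaps = [k for k, ch in enumerate(rows[r]) if ch == "-"]
    if gaps:
        g = random.choice(gaps)
        s = list(rows[r])
        t = max(0, min(len(s) - 1, g + random.choice((-1, 1))))
        s[g], s[t] = s[t], s[g]
        rows[r] = "".join(s)
    return rows

current, temp = alignment, 5.0
for _ in range(1000):
    candidate = shift_gap(current)
    delta = sum_of_pairs(candidate) - sum_of_pairs(current)
    if delta >= 0 or random.random() < math.exp(delta / temp):
        current = candidate                       # accept improving and some worsening moves
    temp *= 0.995                                 # cool the temperature

print(current, sum_of_pairs(current))
```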

Advanced Stochastic Models for Partially Developed Speckle

Speckled images arise when coherent microwave, optical, or acoustic imaging techniques are used to image an object, surface or scene. Examples of coherent imaging systems include synthetic aperture radar, laser imaging systems, imaging sonar systems, and medical ultrasound systems. Speckle noise is a form of object- or target-induced noise that results when the surface of the object is Rayleigh rough compared to the wavelength of the illuminating radiation. Detection and estimation in images corrupted by speckle noise are complicated by the nature of the noise and are not as straightforward as detection and estimation in additive noise. In this work, we derive stochastic models for speckle noise, with an emphasis on speckle as it arises in medical ultrasound images. The motivation for this work is the problem of segmentation and tissue classification using ultrasound imaging. Modeling of speckle in this context involves a partially developed speckle model in which an underlying Poisson point process modulates a Gram-Charlier series of Laguerre-weighted exponential functions, resulting in a doubly stochastic filtered Poisson point process. The statistical distribution of partially developed speckle is derived in a closed canonical form. It is observed that as the mean number of scatterers in a resolution cell increases, the probability density function approaches an exponential distribution, consistent with fully developed speckle noise as predicted by the Central Limit Theorem.
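The limiting behaviour noted above can be checked numerically with a simple random phasor sum: as the mean number of scatterers per resolution cell grows, the intensity distribution approaches an exponential (whose normalized variance is 1). The parameters below are assumptions made for the demonstration.

```python
# Illustrative Monte Carlo check of the transition towards fully developed speckle.
import numpy as np

rng = np.random.default_rng(1)
n_cells = 20_000

for mean_scatterers in (2, 5, 50):
    counts = rng.poisson(mean_scatterers, n_cells)           # scatterers per resolution cell
    max_n = counts.max()
    phases = rng.uniform(0, 2 * np.pi, (n_cells, max_n))
    mask = np.arange(max_n) < counts[:, None]                 # keep only the real scatterers
    field = (mask * np.exp(1j * phases)).sum(axis=1)          # coherent phasor sum per cell
    intensity = np.abs(field) ** 2
    intensity /= intensity.mean()
    # For an exponential distribution the normalized variance equals 1.
    print(mean_scatterers, "normalized intensity variance:", round(intensity.var(), 2))
```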

Comparative Study of Ant Colony and Genetic Algorithms for VLSI Circuit Partitioning

This paper presents a comparative study of Ant Colony and Genetic Algorithms for VLSI circuit bi-partitioning. Ant Colony Optimization is an optimization method based on the behaviour of social insects [27], whereas the Genetic Algorithm is an evolutionary optimization technique based on the Darwinian theory of natural evolution and its concept of survival of the fittest [19]. Both methods are stochastic in nature and have been successfully applied to many NP-hard problems. The results obtained show that Genetic Algorithms outperform the Ant Colony Optimization technique when tested on the VLSI circuit bi-partitioning problem.
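Both metaheuristics ultimately minimize the same quantity, the cut size of a balanced bipartition; the short sketch below shows that fitness on a toy netlist (the netlist and balance tolerance are assumptions for the example).

```python
# Illustrative cut-size fitness for circuit bi-partitioning.
nets = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5), (1, 4)]  # two-pin nets of a toy circuit

def cut_size(partition):
    # partition[i] is 0 or 1: the block to which cell i is assigned.
    return sum(1 for a, b in nets if partition[a] != partition[b])

def balanced(partition, tolerance=1):
    # Enforce (near-)equal block sizes, as bi-partitioning usually requires.
    return abs(sum(partition) * 2 - len(partition)) <= tolerance

candidate = [0, 0, 0, 1, 1, 1]
print(cut_size(candidate), balanced(candidate))   # fitness and feasibility of one individual
```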

Joint Use of Factor Analysis (FA) and Data Envelopment Analysis (DEA) for Ranking of Decision Making Units

This article combines two techniques, data envelopment analysis (DEA) and factor analysis (FA), for data reduction in decision making units (DMUs). DEA, a popular linear programming technique, is useful for comparatively rating the operational efficiency of DMUs based on their deterministic (not necessarily stochastic) input-output data, while factor analysis techniques have been proposed for data reduction and classification and can be applied within DEA to reduce the input-output data. Numerical results reveal that the new approach shows good consistency in ranking with DEA.
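For reference, the standard input-oriented CCR model that DEA solves to rate a unit o can be written as follows (textbook form, here applied to the FA-reduced input-output data).

```latex
\begin{aligned}
\theta_o^{*} = \min_{\theta,\, \lambda} \; & \theta \\
\text{s.t.} \; & \sum_{j=1}^{n} \lambda_j x_{ij} \le \theta\, x_{io}, && i = 1,\dots,m, \\
& \sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{ro}, && r = 1,\dots,s, \\
& \lambda_j \ge 0, && j = 1,\dots,n,
\end{aligned}
```

where x_ij and y_rj are the inputs and outputs of DMU j; units with θ*_o = 1 are rated efficient.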

Split-Pipe Design of Water Distribution Networks Using a Combination of Tabu Search and Genetic Algorithm

In this paper a combined approach based on two heuristic algorithms, a genetic algorithm and tabu search, is proposed. It has been developed to obtain the least-cost split-pipe design of looped water distribution networks, and it has been applied to three well-known networks taken from the literature. Combining the two heuristics aims to enhance their strengths and compensate for their weaknesses: tabu search is rather systematic and deterministic and uses adaptive memory in its search process, while the genetic algorithm is a probabilistic, stochastic optimization technique in which the solution space is explored by generating candidate solutions. Split-pipe design may not be realistic in practice, but for optimization purposes optimal solutions are always achieved with split-pipe design, and the solutions obtained in this study confirm that the least-cost solutions from the split-pipe design are always better than those from the single-pipe design. The results obtained with the combined approach show its ability and effectiveness in solving combinatorial optimization problems. The solutions obtained are very satisfactory and of high quality; for two of the networks they are the lowest-cost solutions yet presented in the literature. The combination concept proposed in this study is expected to contribute useful benefits to diverse problems.

Existence of Solution of Nonlinear Second Order Neutral Stochastic Differential Inclusions with Infinite Delay

The paper is concerned with the existence of solutions of nonlinear second order neutral stochastic differential inclusions with infinite delay in a Hilbert space. Sufficient conditions for existence are obtained by using a fixed point theorem for condensing maps.
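As orientation for the reader, a representative inclusion of this type (a common form in this literature, not necessarily the exact formulation studied in the paper) is

```latex
\begin{aligned}
d\bigl[x'(t) - g(t, x_t)\bigr] &\in \bigl[A\, x(t) + F(t, x_t)\bigr]\,dt + G(t, x_t)\,dw(t), \qquad t \in [0, b],\\
x_0 &= \varphi \in \mathcal{B}_h, \qquad x'(0) = \psi \in H,
\end{aligned}
```

where A generates a strongly continuous cosine family on the Hilbert space H, the history x_t(θ) = x(t + θ) for θ ≤ 0 captures the infinite delay, F is a multivalued map, and w is a Wiener process.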

Noise-Improved Signal Detection in Nonlinear Threshold Systems

We discuss signal detection through nonlinear threshold systems, with detection performance assessed by the probability of error Per. We establish that: (1) when the signal is completely suprathreshold, noise always degrades signal detection, both in a single threshold system and in a parallel array of threshold devices; (2) when the signal is slightly subthreshold, noise degrades signal detection in the single threshold system, but in the parallel array noise can improve signal detection, i.e., stochastic resonance (SR) exists in the array; (3) when the signal is predominantly subthreshold, noise can always improve signal detection and SR exists not only in the single threshold system but also in the parallel array; (4) the array can further improve signal detection by increasing the number of threshold devices. These results further extend the applicability of SR in signal detection.
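The array effect can be illustrated with a small Monte Carlo experiment: a subthreshold signal is passed through a parallel array of threshold devices, and the detection error is estimated at several noise levels, exhibiting a minimum at an intermediate noise intensity. The threshold, signal level, array size, and count-based decision rule below are assumptions made for the sketch.

```python
# Illustrative Monte Carlo estimate of detection error versus noise level
# for a subthreshold signal and a parallel array of threshold devices.
import math
import numpy as np

rng = np.random.default_rng(0)
theta, s, n_dev, n_trials = 1.0, 0.6, 15, 20_000      # s < theta: subthreshold signal

def q(x):                                             # Gaussian tail probability
    return 0.5 * math.erfc(x / math.sqrt(2))

def error_probability(sigma):
    present = rng.integers(0, 2, n_trials)            # 1 = signal present, 0 = absent
    x = present[:, None] * s + rng.normal(0, sigma, (n_trials, n_dev))
    votes = (x > theta).sum(axis=1)                    # devices whose output is 1
    p0, p1 = q(theta / sigma), q((theta - s) / sigma)  # firing prob. without / with signal
    decide = votes > n_dev * (p0 + p1) / 2             # simple count-based decision rule
    return float(np.mean(decide != present.astype(bool)))

for sigma in (0.05, 0.2, 0.4, 0.8, 1.6):
    print(f"noise std {sigma:>4}: P_er = {error_probability(sigma):.3f}")
```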

A New Approach of Fuzzy Methods for Evaluating Hydrological Data

The main design criteria for most hydraulic structures are essentially based on runoff, or water discharge. Two such important criteria are runoff and the return period. These measures are mostly calculated or estimated from stochastic data. Another feature of hydrological data is their imprecision. Therefore, in order to deal with both uncertainty and imprecision, a new fuzzy method for evaluating hydrological measures is developed, based on Buckley's estimation method. The method introduces triangular fuzzy numbers for the different measures, in which both uncertainty and imprecision are considered. In addition, since another important consideration in most hydrological studies is the comparison of a measure across different months or years, a new fuzzy comparison method consistent with the proposed form of fuzzy numbers is also developed. Finally, to illustrate the methods more explicitly, the two algorithms are tested on a simple example and a real case study.
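The sketch below shows what a triangular fuzzy number for a hydrological measure looks like in code; the numerical values are assumptions for the example, whereas in the proposed method Buckley's estimation would derive them from the data.

```python
# Illustrative triangular fuzzy number for a hydrological measure.
from dataclasses import dataclass

@dataclass
class TriangularFuzzyNumber:
    low: float    # smallest plausible value (membership 0)
    mode: float   # most plausible value (membership 1)
    high: float   # largest plausible value (membership 0)

    def membership(self, x: float) -> float:
        if self.low < x <= self.mode:
            return (x - self.low) / (self.mode - self.low)
        if self.mode < x < self.high:
            return (self.high - x) / (self.high - self.mode)
        return 1.0 if x == self.mode else 0.0

    def alpha_cut(self, alpha: float) -> tuple:
        # Interval of values whose membership is at least alpha.
        return (self.low + alpha * (self.mode - self.low),
                self.high - alpha * (self.high - self.mode))

runoff = TriangularFuzzyNumber(120.0, 150.0, 200.0)   # e.g. annual runoff in mm (assumed)
print(runoff.membership(160.0), runoff.alpha_cut(0.5))
```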