Development of a Comprehensive Electricity Generation Simulation Model Using a Mixed Integer Programming Approach

This paper presents the development of an electricity generation simulation model that takes electrical network constraints into account, applied to the Belgian power system. At the core of the model is an extensive Unit Commitment (UC) problem, solved by Mixed Integer Linear Programming (MILP). Electrical constraints are incorporated through a DC load flow. The model covers the Belgian 220–380 kV high-voltage network (i.e., 93 power plants and 106 nodes) and includes pumped-storage facilities as well as spinning reserves within a single optimization process. Solution times of the model remain within reasonable limits.
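
For orientation, here is a minimal sketch of the MILP unit commitment core (not the paper's model: a toy two-unit, four-hour instance with invented costs and a simple spinning-reserve requirement, written with the PuLP library):

```python
# Toy unit commitment with PuLP: binary on/off variables, generation
# bounds, power balance and a spinning-reserve margin. All data invented.
import pulp

T = range(4)                                # hours
units = {"coal": (100, 400, 30.0),          # (Pmin MW, Pmax MW, cost EUR/MWh)
         "gas":  (50, 200, 60.0)}
demand = [250, 380, 420, 300]               # MW per hour
reserve = 0.10                              # 10% spinning reserve

prob = pulp.LpProblem("toy_UC", pulp.LpMinimize)
u = {(g, t): pulp.LpVariable(f"u_{g}_{t}", cat="Binary")
     for g in units for t in T}
p = {(g, t): pulp.LpVariable(f"p_{g}_{t}", lowBound=0)
     for g in units for t in T}

# Objective: total fuel cost over the horizon.
prob += pulp.lpSum(units[g][2] * p[g, t] for g in units for t in T)

for t in T:
    # Power balance, and online capacity >= demand + reserve.
    prob += pulp.lpSum(p[g, t] for g in units) == demand[t]
    prob += pulp.lpSum(units[g][1] * u[g, t] for g in units) >= (1 + reserve) * demand[t]
    for g in units:
        # Generation bounds linked to the commitment status.
        prob += p[g, t] >= units[g][0] * u[g, t]
        prob += p[g, t] <= units[g][1] * u[g, t]

prob.solve()
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```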

Peer Assessment in the Context of Project-Based Learning Online

Project-based pedagogy has proven to be an active learning method used to develop learners' skills and knowledge. The use of technology in education has filled several gaps in the implementation of teaching methods and in the online evaluation of learners. However, the project methodology presents challenges for assessing learners online. Indeed, interoperability between e-learning platforms (LMS) is one of the major challenges of project-based learning assessment. We first review the characteristics of online assessment in the context of project-based teaching and address the constraints encountered during the peer-evaluation process. Our approach is to propose a meta-model that describes a language dedicated to the design of peer-assessment scenarios in project-based learning. We then illustrate our proposal by instantiating the meta-model through a business process in a collaborative online assessment scenario.

The Development of Decision Support Systems for Waste Management: A Review

Most Decision Support Systems (DSS) constructed for waste management (WM) are not widely marketed and lack practical application. This is due to the number of variables and the complexity of the mathematical models, which include the assumptions and constraints required in decision making. The approach taken by many researchers in DSS modelling is to isolate a few key factors that have a significant influence on the DSS. This segmented approach does not provide a thorough understanding of the complex relationships among the many elements involved. The various elements in constructing the DSS must be integrated and optimized in order to produce a viable model that is marketable and practically applicable. The DSS model used to assist decision makers should be integrated with GIS, give robust predictions despite the inherent uncertainties of waste generation and the plethora of waste characteristics, and give an optimal allocation of waste streams to recycling, incineration, landfill and composting.
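
As a minimal illustration of the allocation component mentioned above (a sketch only, not one of the reviewed systems; the tonnages, costs, capacities and recycling target are invented), a linear program can split a forecast waste stream among the four treatment options:

```python
# Toy allocation of a waste stream to treatment options with PuLP.
# Costs, capacities and the recycling-rate target are illustrative only.
import pulp

waste = 1000.0                                   # tonnes/day to allocate
options = {"recycling": (55.0, 300), "incineration": (80.0, 400),
           "landfill": (30.0, 800), "composting": (45.0, 250)}  # (cost/t, cap t/day)

prob = pulp.LpProblem("waste_allocation", pulp.LpMinimize)
x = {o: pulp.LpVariable(o, lowBound=0, upBound=cap)
     for o, (cost, cap) in options.items()}

prob += pulp.lpSum(options[o][0] * x[o] for o in options)   # total treatment cost
prob += pulp.lpSum(x.values()) == waste                     # all waste is treated
prob += x["recycling"] >= 0.20 * waste                      # policy: >= 20% recycled

prob.solve()
print({o: x[o].value() for o in options})
```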

Reconstitute Information about Discontinued Water Quality Variables in the Nile Delta Monitoring Network Using Two Record Extension Techniques

The world economic crisis and budget constraints have caused authorities, especially those in developing countries, to rationalize water quality monitoring activities. Rationalization consists of reducing the number of monitoring sites, the number of samples, and/or the number of water quality variables measured. The reduction in water quality variables is usually based on correlation: if two variables exhibit high correlation, it is an indication that some of the information produced may be redundant. Consequently, one variable can be discontinued while the other continues to be measured. Later, the ordinary least squares (OLS) regression technique is employed to reconstitute information about the discontinued variable by using the continuously measured one as an explanatory variable. In this paper, two record extension techniques are employed to reconstitute information about discontinued water quality variables: OLS and the Line of Organic Correlation (LOC). An empirical experiment is conducted using water quality records from the Nile Delta water quality monitoring network in Egypt. The record extension techniques are compared for their ability to predict different statistical parameters of the discontinued variables. Results show that OLS is better at estimating individual water quality records; however, it underestimates the variance of the extended records. The LOC technique is superior in preserving the characteristics of the entire distribution and avoids this underestimation of the variance. It is concluded that OLS can be used for the substitution of missing values, while LOC is preferable for inferring statements about the probability distribution.
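
The difference between the two techniques reduces to the choice of slope, as this minimal numpy sketch on synthetic data shows (the Nile Delta records are not reproduced here): OLS shrinks the slope by the correlation coefficient, so the variance of the extended record is deflated by r squared, while LOC uses only the ratio of standard deviations and preserves the variance:

```python
# OLS vs LOC record extension on synthetic data; all parameters invented.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(10, 2, 200)                 # continuously measured variable
y = 1.5 * x + rng.normal(0, 1.5, 200)      # discontinued variable (overlap period)

r = np.corrcoef(x, y)[0, 1]
sx, sy = x.std(ddof=1), y.std(ddof=1)

# OLS slope shrinks by r, so extended records have variance r^2 * sy^2.
b_ols = r * sy / sx
# LOC slope uses the ratio of standard deviations, preserving the variance.
b_loc = np.sign(r) * sy / sx

x_new = rng.normal(10, 2, 500)             # period where only x was measured
for name, b in [("OLS", b_ols), ("LOC", b_loc)]:
    y_hat = y.mean() + b * (x_new - x.mean())
    print(name, "slope=%.3f" % b,
          "var(extended)=%.2f" % y_hat.var(ddof=1),
          "var(observed)=%.2f" % y.var(ddof=1))
```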

Feature-Based Dense Stereo Matching using Dynamic Programming and Color

This paper presents a new feature-based dense stereo matching algorithm that obtains the dense disparity map via dynamic programming. After extracting suitable features, we apply matching constraints such as the epipolar line, the disparity limit, ordering, and a limit on the directional derivative of disparity. A coarse-to-fine multiresolution strategy is also used to reduce the search space and thereby increase both accuracy and processing speed. The proposed method links the detected feature points into chains and compares feature points from different chains to increase matching speed. We also employ color stereo matching to increase the accuracy of the algorithm. After feature matching, dynamic programming is used to obtain the dense disparity map. The approach differs from classical DP methods in stereo vision in that it exploits the sparse disparity map obtained in the feature-based matching stage: DP is performed on each scan line only between pairs of matched feature points on that line. Thus the algorithm remains a true optimization method while operating on a reduced search space, offering a good trade-off between accuracy and computational efficiency. In our experiments, the proposed algorithm increases accuracy by 20% to 70% and reduces running time by almost 70%.
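
To make the scan-line DP step concrete, here is a minimal sketch of classical scan-line dynamic programming with an occlusion penalty (the paper additionally restricts the DP to segments between matched feature points and uses color; both refinements are omitted in this toy version):

```python
# Scan-line DP stereo on two 1-D intensity rows with an occlusion cost.
import numpy as np

def scanline_dp(left, right, occ=20.0):
    """Match two rectified scan lines; return {left_pixel: disparity}."""
    n, m = len(left), len(right)
    cost = np.empty((n + 1, m + 1))
    cost[0, :] = occ * np.arange(m + 1)      # leading occlusions
    cost[:, 0] = occ * np.arange(n + 1)
    back = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            choices = (cost[i-1, j-1] + (left[i-1] - right[j-1]) ** 2,  # match
                       cost[i-1, j] + occ,   # left pixel occluded
                       cost[i, j-1] + occ)   # right pixel occluded
            back[i, j] = int(np.argmin(choices))
            cost[i, j] = choices[back[i, j]]
    disp, i, j = {}, n, m
    while i > 0 and j > 0:                   # backtrack the optimal path
        if back[i, j] == 0:
            disp[i - 1] = (i - 1) - (j - 1)
            i, j = i - 1, j - 1
        elif back[i, j] == 1:
            i -= 1
        else:
            j -= 1
    return disp

row = np.array([10, 10, 50, 90, 90, 50, 10, 10], dtype=float)
print(scanline_dp(row, np.roll(row, 1)))     # shifted copy: constant disparity
```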

Optimal SSSC Placement for ATC Enhancement in Power Systems

This paper addresses the optimization of the available transfer capability (ATC) of power systems using a FACTS device, the SSSC, equipped with energy storage. The placement of the SSSC and the tuning of its parameters are illustrated. Voltage magnitude constraints at the network buses, line transient stability constraints and voltage breakdown constraints are considered. To support the calculations, a comprehensive program written in DELPHI is provided, which can simulate and trace the parameters of an SSSC installed on a specific line. Furthermore, the program can compute ATC, TTC and the maximum enhancement of both after installing the SSSC.

Modelling and Analysis of a Robust Control of Manufacturing Systems: Flow-Quality Approach

This paper proposes a method for modelling the laws that control manufacturing systems with temporal and non-temporal constraints. A methodology for constructing robust control, generating margins of passive and active robustness, is elaborated. Two principal models are presented. The first uses P-time Petri nets to manage flow-type disturbances. The second, the quality model, exploits the Intervals Constrained Petri Nets (ICPN) tool, which allows the system to preserve its quality specifications. The redundancy between the passive and active robustness of the elementary parameters is also used. The final model correlates temporal and non-temporal criteria by putting the two models in interaction. To this end, a set of definitions and theorems are stated and illustrated by application examples.
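
To make the temporal side concrete, here is a minimal sketch of the P-time Petri net timing rule (not the paper's models; the places, intervals and schedule are invented): each place carries an interval [a, b], and a token must stay in the place at least a and at most b time units before being consumed, so a schedule is temporally valid only if every sojourn time falls within its interval:

```python
# Toy validity check of token sojourn times against P-time intervals.
# Places, intervals and the tested schedule are illustrative only.

intervals = {"p1": (2.0, 5.0), "p2": (1.0, 3.0), "p3": (0.0, 4.0)}

def check_schedule(sojourn):
    """Return the places whose token sojourn time violates [a, b]."""
    violations = []
    for place, t in sojourn.items():
        a, b = intervals[place]
        if not (a <= t <= b):
            violations.append((place, t, (a, b)))
    return violations

# 2.5 in p1 (ok), 3.4 in p2 (too long: a "dead" token in P-time
# semantics), 1.0 in p3 (ok).
print(check_schedule({"p1": 2.5, "p2": 3.4, "p3": 1.0}))
```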

An Integrated Design Evaluation and Assembly Sequence Planning Model using a Particle Swarm Optimization Approach

In the traditional concept of product life cycle management, the activities of design, manufacturing, and assembly are performed sequentially. The drawback is that considerations in design may contradict considerations in manufacturing and assembly. Different designs of the components can lead to different assembly sequences, so in some cases a good design may result in high costs in the downstream assembly activities. In this research, an integrated design evaluation and assembly sequence planning model is presented. Given a product requirement, there may be several alternative design cases for the components of the same product, and a different design case can imply a different assembly sequence for constructing the product. First, the designed components are represented by graph-based models, which are transformed into assembly precedence constraints and assembly costs. A particle swarm optimization (PSO) approach is then presented in which each particle is encoded as a position matrix defined by the design cases and the assembly sequences. The PSO algorithm simultaneously performs design evaluation and assembly sequence planning with the objective of minimizing the total assembly cost, so that the design cases and the assembly sequences are optimized together. The main contribution lies in the new concept of an integrated design evaluation and assembly sequence planning model and the new PSO solution method. Test results on an example product show that the presented method is feasible and efficient for solving the integrated design evaluation and assembly planning problem.
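
As a minimal sketch of PSO searching over assembly sequences (a random-key decoding stand-in rather than the paper's position-matrix encoding; the components, precedence pairs and costs are invented):

```python
# Random-key PSO over assembly sequences: each particle is a continuous
# vector whose argsort gives a sequence; precedence violations are penalized.
import numpy as np

rng = np.random.default_rng(1)
n = 6                                        # number of components
precedence = [(0, 2), (1, 3), (2, 4)]        # (i must precede j), invented
cost = rng.uniform(1, 5, (n, n))             # cost[i, j]: assemble j after i

def fitness(keys):
    seq = np.argsort(keys)                   # decode keys into a sequence
    pos = np.argsort(seq)                    # pos[c] = index of c in sequence
    penalty = sum(100.0 for i, j in precedence if pos[i] > pos[j])
    return sum(cost[seq[k], seq[k+1]] for k in range(n - 1)) + penalty

swarm = rng.uniform(0, 1, (20, n))           # 20 particles
vel = np.zeros_like(swarm)
pbest = swarm.copy()
pbest_f = np.array([fitness(p) for p in swarm])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.uniform(size=(2, 20, n))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - swarm) + 1.5 * r2 * (gbest - swarm)
    swarm += vel
    f = np.array([fitness(p) for p in swarm])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = swarm[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best sequence:", np.argsort(gbest), "cost: %.2f" % pbest_f.min())
```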

Optimal Capacitor Placement in Distribution Feeders

Optimal capacitor allocation in distribution systems has been studied for a long time. It is an optimization problem whose objective is to determine the optimal sizes and locations of the capacitors to be installed. In this work, an overview of the capacitor placement problem in distribution systems is briefly introduced. The objective functions and constraints of the problem are listed, and the methodologies for solving the problem are summarized.
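
A commonly used form of the objective (a generic sketch, not tied to any single surveyed method; the symbols are placeholders) minimizes the cost of energy losses plus the capacitor investment, subject to bus voltage limits:

```latex
\min \; K_e \sum_{t} T_t \, P_{\mathrm{loss},t}
      \;+\; \sum_{j \in \mathcal{C}} K_c \, Q_{c,j}
\qquad \text{s.t.} \quad V_{\min} \le V_i \le V_{\max} \ \ \forall i,
```

where K_e is the energy cost, T_t and P_loss,t are the duration and power loss of load level t, K_c is the per-kvar capacitor cost, and Q_c,j is the capacitor size at candidate bus j.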

Fingerprint Compression Using Multiwavelets

Large volumes of fingerprints are collected and stored every day in a wide range of applications, including forensics and access control. This is evident from the database of the Federal Bureau of Investigation (FBI), which contains more than 70 million fingerprints; compression of such databases is very important because of this high volume. The performance of existing image coding standards generally degrades at low bit rates because of the underlying block-based Discrete Cosine Transform (DCT) scheme. Over the past decade, the success of wavelets in solving many different problems has contributed to their unprecedented popularity. Due to implementation constraints, however, scalar wavelets do not possess all the properties needed for better compression performance. A newer class of wavelets, called multiwavelets, which possess more than one scaling filter, overcomes this problem. The objective of this paper is to develop an efficient compression scheme that obtains better quality and a higher compression ratio through the multiwavelet transform and embedded coding of the multiwavelet coefficients with the Set Partitioning In Hierarchical Trees (SPIHT) algorithm. The best known multiwavelets are compared with the best known scalar wavelets, and both quantitative and qualitative measures of performance are examined for fingerprints.
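
For orientation, here is a minimal transform-coding sketch using scalar wavelets (PyWavelets has no multiwavelet support, so this illustrates only the baseline against which multiwavelets are compared; the image and the retention rate are stand-ins):

```python
# Baseline scalar-wavelet compression sketch: decompose, keep the largest
# coefficients, reconstruct, and report PSNR. Parameters are illustrative.
import numpy as np
import pywt

img = np.random.default_rng(2).uniform(0, 255, (128, 128))  # stand-in image

coeffs = pywt.wavedec2(img, "db4", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)

keep = 0.05                                  # keep the top 5% of coefficients
thresh = np.quantile(np.abs(arr), 1 - keep)
arr_c = np.where(np.abs(arr) >= thresh, arr, 0.0)

rec = pywt.waverec2(pywt.array_to_coeffs(arr_c, slices, output_format="wavedec2"),
                    "db4")[:128, :128]
mse = np.mean((img - rec) ** 2)
print("PSNR: %.2f dB" % (10 * np.log10(255.0 ** 2 / mse)))
```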

Complexity of Component-based Development of Embedded Systems

This paper discusses the complexity of component-based development (CBD) of embedded systems. Although CBD has its merits, it must be augmented with methods to control the complexities that arise from resource constraints, timeliness requirements, and the run-time deployment of components in embedded system development. Software component specification, system-level testing, and run-time reliability measurement are some ways to control this complexity.

Hull Separation Optimization of Catamaran Unmanned Surface Vehicle Powered with Hydrogen Fuel Cell

This paper presents an optimization of the hull separation, i.e., the transverse clearance, of a catamaran unmanned surface vehicle. The main objectives are to identify the feasible speed ranges and to find the optimum transverse clearance for minimum wave-making resistance. The dimensions and weight of the hardware systems installed in the catamaran-structured, fuel cell powered USV (Unmanned Surface Vehicle) were treated as constraints. The CAE (Computer Aided Engineering) platform FRIENDSHIP-Framework was used for hull surface modelling, DoE (Design of Experiments), tangent search optimization, tool integration and process automation. The hydrodynamic results were evaluated with XPAN, the potential-flow solver of SHIPFLOW.

Optimum Shape and Design of Cooling Towers

The aim of the current study is to develop a numerical tool capable of finding an optimum shape and design for hyperbolic cooling towers, based on coupling an in-house non-linear finite element model with a genetic algorithm optimization technique. The objective function is the minimum weight of the tower. The geometry of the tower is represented by B-spline curves. The finite element method is applied to model the elastic buckling behaviour of a tower subjected to wind pressure and dead load. The study is divided into two main parts. The first investigates the optimum shape of the tower for minimum weight, assuming constant thickness. The second extends the study by introducing the shell thickness as an additional design variable in order to achieve an optimum shape and design. Design, functionality and practicality constraints are applied.
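
A minimal sketch of the coupling idea follows (a toy real-coded GA with a penalized constraint; the invented weight and buckling surrogates stand in for the paper's finite element model):

```python
# Toy GA: minimize a surrogate "weight" over a few shape variables, with a
# penalty when a surrogate buckling margin drops below 1. All invented.
import numpy as np

rng = np.random.default_rng(3)

def weight(x):                    # surrogate: smaller variables = lighter
    return x.sum()

def buckling_margin(x):           # surrogate: too light a shell buckles
    return x.sum() / 8.0

def penalized(x):
    m = buckling_margin(x)
    return weight(x) + (1e3 * (1.0 - m) if m < 1.0 else 0.0)

pop = rng.uniform(1.0, 5.0, (30, 4))             # 30 candidates, 4 variables
for _ in range(100):
    f = np.array([penalized(x) for x in pop])
    parents = pop[np.argsort(f)[:10]]            # selection: keep best third
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(0, 10, 2)]
        c = 0.5 * (a + b) + rng.normal(0, 0.1, 4)    # crossover + mutation
        children.append(np.clip(c, 0.5, 6.0))
    pop = np.vstack([parents, children])

best = pop[np.argmin([penalized(x) for x in pop])]
print("best shape variables:", best, "weight: %.2f" % weight(best))
```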

Analysis of Message Authentication in Turbo Coded Halftoned Images using EXIT Charts

Considering payload, reliability, security and operational lifetime as major constraints in the transmission of images, we put forward in this paper a steganographic technique implemented at the physical layer. We suggest transmitting halftoned images (payload constraint) in wireless sensor networks to reduce the amount of transmitted data. For low-power and interference-limited applications, Turbo codes provide suitable reliability. Ensuring security is one of the highest priorities in many sensor networks, and the Turbo code structure, apart from providing forward error correction, can be utilized to provide encryption. We first consider the halftoned image and then present the method of embedding a block of data (called the secret) in this halftoned image during the turbo encoding process. The small modifications required at the turbo decoder to extract the embedded data are presented next. The implementation complexity and the degradation of the BER (bit error rate) in the Turbo-based stego system are analyzed. Using entropy-based cryptanalytic techniques, we show that the strength of our Turbo-based stego system approaches that of one-time pads (OTPs).

Adaptation of Iterative Methods to Solve Fuzzy Mathematical Programming Problems

Based on fuzzy set theory, this work develops two adaptations of iterative methods for solving mathematical programming problems with uncertainties in the objective function and in the set of constraints. The first builds on the approach proposed by Zimmermann for fuzzy linear programming problems; the second obtains cut levels and then maximizes the membership function of the fuzzy decision using the bound search method. We outline similarities between the two iterative methods studied. Selected examples from the literature are presented to validate the efficiency of the methods addressed.
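
A minimal sketch of the Zimmermann-style reformulation (generic, with an invented aspiration level and tolerances): fuzzy goals and constraints become linear membership conditions, and a crisp LP maximizes the common satisfaction degree lambda:

```python
# Zimmermann-style fuzzy LP via PuLP: maximize lambda subject to linear
# membership conditions. Aspiration level and tolerances are invented.
import pulp

prob = pulp.LpProblem("zimmermann_fuzzy_lp", pulp.LpMaximize)
x1 = pulp.LpVariable("x1", lowBound=0)
x2 = pulp.LpVariable("x2", lowBound=0)
lam = pulp.LpVariable("lambda", lowBound=0, upBound=1)

prob += lam                                   # maximize overall satisfaction

# Fuzzy goal: 3*x1 + 2*x2 "about at least" z0 = 18, tolerance p0 = 4:
# membership >= lambda  <=>  3*x1 + 2*x2 >= z0 - (1 - lambda) * p0
prob += 3 * x1 + 2 * x2 >= 18 - (1 - lam) * 4

# Fuzzy constraint: x1 + x2 "roughly at most" b1 = 5, tolerance p1 = 2:
prob += x1 + x2 <= 5 + (1 - lam) * 2

prob.solve()
print("lambda =", lam.value(), "x =", (x1.value(), x2.value()))
```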

Multiobjective Optimization Solution for Shortest Path Routing Problem

The shortest path routing problem is a multiobjective nonlinear optimization problem with constraints. It has previously been addressed by considering the quality-of-service parameters, delay and cost, as separate objectives or as a weighted sum of both. Multiobjective evolutionary algorithms can find multiple Pareto-optimal solutions in a single run, and this ability makes them attractive for solving problems with multiple conflicting objectives. This paper uses an elitist multiobjective evolutionary algorithm based on the Non-dominated Sorting Genetic Algorithm (NSGA) to solve the dynamic shortest path routing problem in computer networks. A priority-based encoding scheme is proposed for population initialization, and elitism ensures that the best solutions do not deteriorate across generations. Results for a sample test network demonstrate the capability of the proposed approach to generate well-distributed Pareto-optimal solutions of the dynamic routing problem in a single run. The results obtained by NSGA are compared with the single-objective weighting-factor method, solved with a Genetic Algorithm (GA).
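
As a minimal sketch of what "multiple Pareto-optimal paths in one run" means (exhaustive enumeration on an invented toy graph rather than NSGA, which only pays off on larger networks):

```python
# Enumerate all simple paths in a toy network, score each by (delay, cost),
# and keep the non-dominated (Pareto-optimal) set. Edge weights are invented.
import networkx as nx

G = nx.Graph()
edges = [("s", "a", 2, 5), ("s", "b", 4, 2), ("a", "t", 3, 4),
         ("b", "t", 1, 6), ("a", "b", 1, 1)]
for u, v, d, c in edges:
    G.add_edge(u, v, delay=d, cost=c)

def score(path):
    hops = list(zip(path, path[1:]))
    return (sum(G[u][v]["delay"] for u, v in hops),
            sum(G[u][v]["cost"] for u, v in hops))

paths = [(p, score(p)) for p in nx.all_simple_paths(G, "s", "t")]
pareto = [(p, s) for p, s in paths
          if not any(o[0] <= s[0] and o[1] <= s[1] and o != s
                     for _, o in paths)]
print(pareto)   # the (delay, cost) trade-off front, not a single best path
```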

A New Heuristic Statistical Methodology for Optimizing Queuing Networks Using Discrete Event Simulation

Most real queuing systems include special properties and constraints that cannot be analyzed directly using the results of solved classical queuing models. Non-Markovian behaviour, non-exponential patterns and service constraints are examples of such conditions. This paper presents an applied general algorithm for analyzing and optimizing queuing systems. The stages of the algorithm are described through a real case study: an almost completely non-Markovian system with a limited number of customers and limited capacities, exhibiting many of the common complications of real queuing networks. Simulation is used to optimize this system. The stages presented in the article include primary modeling, determining the kinds of queuing systems involved, index definition, statistical analysis and goodness-of-fit testing, model validation, and simulation-based optimization of the system.
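
A minimal event-driven sketch of the kind of system described (single server, non-exponential service, finite waiting room; all distributions and parameters are invented):

```python
# Event-driven simulation of a single-server queue with lognormal
# (non-exponential) service and a finite waiting room; arrivals finding
# the room full are lost. Parameters are illustrative only.
import heapq
import numpy as np

rng = np.random.default_rng(4)
CAP, T_END = 5, 10_000.0                 # system capacity, time horizon

events = [(rng.exponential(1.0), "arrival")]
queue, busy_until, served, lost = 0, 0.0, 0, 0

while events:
    t, kind = heapq.heappop(events)
    if t > T_END:
        break
    if kind == "arrival":
        heapq.heappush(events, (t + rng.exponential(1.0), "arrival"))
        if queue < CAP:
            queue += 1
            start = max(t, busy_until)   # FIFO: wait for the server
            busy_until = start + rng.lognormal(mean=-0.2, sigma=0.6)
            heapq.heappush(events, (busy_until, "departure"))
        else:
            lost += 1                    # blocked: capacity constraint
    else:
        queue -= 1
        served += 1

print("served:", served, "lost:", lost,
      "loss rate: %.3f" % (lost / (served + lost)))
```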

ANN-based Multi-Classifier System for Prediction of High Energy Shower Primary Energy and Core Location

Cosmic showers, during their transit through space, produce sub-products through interactions with the intergalactic or interstellar medium; on entering the earth's atmosphere, these generate secondary particles called Extensive Air Showers (EAS). The detection and analysis of high energy particle showers involve a plethora of theoretical and experimental work under a host of constraints, resulting in measurement inaccuracies. There is therefore a need for a readily available system based on soft-computational approaches for EAS analysis, since soft computational tools such as Artificial Neural Networks (ANNs) can be trained as classifiers that adapt to and learn the surrounding variations. Single classifiers, however, fail to reach optimal decision making in many situations, so Multiple Classifier Systems (MCS) are preferred to enhance the ability of the system to make decisions that adjust to finer variations. This work describes the formation of an MCS using a Multi Layer Perceptron (MLP), a Recurrent Neural Network (RNN) and a Probabilistic Neural Network (PNN), with data inputs from correlation-mapping Self Organizing Map (SOM) blocks and the output optimized by another SOM. The results show that the setup can be adopted in real-time practical applications for predicting the primary energy and core location of EAS from density values captured by detectors in a circular grid.

A PIM (Processor-In-Memory) for Computer Graphics: Data Partitioning and Placement Schemes

The demand for higher-performance graphics continues to grow because of the incessant desire for realism, and rapid advances in fabrication technology have enabled us to build several processor cores on a single die. It is therefore important to develop single-chip parallel architectures for such data-intensive applications. In this paper, we propose an efficient PIM architecture tailored for computer graphics, which requires a large number of memory accesses. We then address the two tasks necessary for maximally exploiting the parallelism provided by the architecture, namely partitioning and placement of graphics data, which affect load balance and communication costs respectively. Under the constraint of uniform partitioning, we develop approaches for optimal partitioning and placement that significantly reduce the search space. We also present heuristics for identifying near-optimal placements, since the search space for placement remains impractically large despite our optimization. We then demonstrate the effectiveness of our partitioning and placement approaches via analysis of example scenes; simulation results show considerable search space reductions, and our placement heuristics perform close to optimal: the average ratio of communication overheads between our heuristics and the optimal was 1.05. Our uniform partitioning showed average load-balance ratios of 1.47 for geometry processing and 1.44 for rasterization, which is reasonable.
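
For clarity, here is a minimal sketch of one common definition of the load-balance metric quoted above (a max-to-mean ratio over uniformly partitioned work items, which is an assumption about the paper's exact definition; the scene data are random stand-ins):

```python
# Load-balance ratio of a uniform partition: assign work items (e.g.
# triangles) round-robin to memory banks and compare max to mean bank load.
# Per-item costs are random stand-ins for real scene data.
import numpy as np

rng = np.random.default_rng(5)
work = rng.uniform(0.5, 2.0, 10_000)        # per-triangle processing cost
banks = 16

loads = np.zeros(banks)
for i, w in enumerate(work):
    loads[i % banks] += w                   # uniform (round-robin) partitioning

print("load-balance ratio (max/mean): %.3f" % (loads.max() / loads.mean()))
```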

Drum-Buffer-Rope: The Technique to Plan and Control Production Using the Theory of Constraints

The Theory of Constraints (TOC) has been emerging as an important tool for the optimization of manufacturing and service systems. Goldratt, in his first book, "The Goal", introduced the Theory of Constraints and its applications in a factory scenario. A large number of production managers around the globe read this book, but only a few could implement it in their plants because the book did not explain the steps for implementing TOC in a factory. To overcome these limitations, Goldratt wrote a later book explaining TOC, Drum-Buffer-Rope (DBR) and the method to implement them. In this paper, an attempt has been made to summarize the salient features of TOC and DBR presented in that book, together with the correct approach to implementing TOC in a factory setting. The simulator accompanying the book was used by the authors, and Goldratt's claim that DBR and buffer management ease the work of production managers was tested and found to be correct.
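
A minimal sketch of the drum-buffer-rope mechanism itself (a toy three-station line with an invented bottleneck; the rope releases material only when the bottleneck buffer falls below its target, which keeps work-in-process bounded):

```python
# Toy drum-buffer-rope: material is released (rope) only when the buffer in
# front of the bottleneck (drum) drops below a target. All rates invented.
import numpy as np

rng = np.random.default_rng(6)
BUFFER_TARGET, HOURS = 4, 500
pre_bn, post_bn = 0, 0          # jobs waiting before/after the bottleneck
released = finished = 0

for _ in range(HOURS):
    # Rope: release into the line only to refill the bottleneck buffer.
    if pre_bn < BUFFER_TARGET:
        pre_bn += 1
        released += 1
    # Drum: the bottleneck paces the plant (~0.8 jobs/hour, invented).
    if pre_bn > 0 and rng.random() < 0.8:
        pre_bn -= 1
        post_bn += 1
    # A non-bottleneck finishing station has spare capacity (~0.95 jobs/hour).
    if post_bn > 0 and rng.random() < 0.95:
        post_bn -= 1
        finished += 1

print("released:", released, "finished:", finished,
      "WIP:", released - finished)   # WIP stays near the buffer target
```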