Fingerprint Compression Using Multiwavelets

Large volumes of fingerprints are collected and stored every day in a wide range of applications, including forensics and access control. This is evident from the Federal Bureau of Investigation (FBI) database, which contains more than 70 million fingerprints. Because of this high volume, compression of such databases is very important. The performance of existing image coding standards generally degrades at low bit rates because of the underlying block-based Discrete Cosine Transform (DCT) scheme. Over the past decade, the success of wavelets in solving many different problems has contributed to their unprecedented popularity. Due to implementation constraints, however, scalar wavelets do not possess all the properties needed for better compression performance. A new class of wavelets called multiwavelets, which possess more than one scaling function, overcomes this problem. The objective of this paper is to develop an efficient compression scheme that obtains better quality and a higher compression ratio through the multiwavelet transform and embedded coding of the multiwavelet coefficients with the Set Partitioning In Hierarchical Trees (SPIHT) algorithm. The best known multiwavelets are compared with the best known scalar wavelets, and both quantitative and qualitative measures of performance are examined for fingerprints.
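As an illustration of the transform-and-embedded-coding pipeline (not code from the paper), the minimal sketch below uses scalar wavelets via PyWavelets with simple coefficient thresholding standing in for SPIHT; the multiwavelet filter bank (e.g., GHM), the fingerprint image, and the retention ratio are assumptions.

```python
# Minimal sketch: wavelet decomposition of a fingerprint image with simple
# coefficient thresholding as a crude stand-in for SPIHT embedded coding.
# Uses scalar wavelets (PyWavelets); a multiwavelet (e.g., GHM) filter bank
# would replace pywt here.  Image and keep_ratio are illustrative.
import numpy as np
import pywt

def compress_sketch(image, wavelet="db4", levels=3, keep_ratio=0.05):
    # Multi-level 2-D wavelet decomposition.
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    arr, slices = pywt.coeffs_to_array(coeffs)

    # Keep only the largest coefficients (surrogate for embedded coding).
    threshold = np.quantile(np.abs(arr), 1.0 - keep_ratio)
    arr_kept = np.where(np.abs(arr) >= threshold, arr, 0.0)

    # Reconstruct and report PSNR as the quantitative quality measure.
    recon = pywt.waverec2(
        pywt.array_to_coeffs(arr_kept, slices, output_format="wavedec2"),
        wavelet)
    recon = recon[:image.shape[0], :image.shape[1]]
    mse = np.mean((image - recon) ** 2)
    psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
    return recon, psnr

if __name__ == "__main__":
    img = np.random.rand(256, 256) * 255        # placeholder for a fingerprint
    _, psnr = compress_sketch(img)
    print(f"PSNR at 5% retained coefficients: {psnr:.2f} dB")
```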

Finding a Solution, all Solutions, or the Most Probable Solution to a Temporal Interval Algebra Network

Over the years, many implementations have been proposed for solving IA networks, and these implementations are chiefly concerned with finding a solution efficiently. The primary goals of our implementation are simplicity and ease of use. We present an IA network implementation based on finite-domain non-binary CSPs and constraint logic programming. The implementation has a GUI that permits the drawing of arbitrary IA networks. We then show how the implementation can be extended to find all the solutions to an IA network. One application of finding all the solutions is solving probabilistic IA networks.
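As an illustration (not from the paper, which uses constraint logic programming), the sketch below encodes a tiny Allen interval network as a finite-domain CSP with the python-constraint package; the three-interval network, the relations chosen, and the discrete time horizon are assumptions.

```python
# Minimal sketch: a tiny Allen interval network encoded as a finite-domain
# CSP with the python-constraint package.  The toy network and the discrete
# time horizon are illustrative.
from constraint import Problem

HORIZON = range(0, 8)           # illustrative discrete time domain
problem = Problem()

# Each interval X is a pair of endpoint variables (X_s, X_e) with X_s < X_e.
for iv in ("A", "B", "C"):
    problem.addVariable(iv + "_s", HORIZON)
    problem.addVariable(iv + "_e", HORIZON)
    problem.addConstraint(lambda s, e: s < e, (iv + "_s", iv + "_e"))

# Allen relations expressed as endpoint constraints:
#   A before B : A_e < B_s        B meets C : B_e == C_s
problem.addConstraint(lambda ae, bs: ae < bs, ("A_e", "B_s"))
problem.addConstraint(lambda be, cs: be == cs, ("B_e", "C_s"))

one = problem.getSolution()        # a single consistent scenario
all_sols = problem.getSolutions()  # all scenarios (basis for probabilistic IA)
print(one)
print(f"{len(all_sols)} consistent scenarios over this domain")
```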

Complexity of Component-based Development of Embedded Systems

This paper discusses the complexity of component-based development (CBD) of embedded systems. Although CBD has its merits, it must be augmented with methods to control the complexities that arise from resource constraints, timeliness requirements, and run-time deployment of components in embedded system development. Software component specification, system-level testing, and run-time reliability measurement are some ways to control this complexity.

Hull Separation Optimization of Catamaran Unmanned Surface Vehicle Powered with Hydrogen Fuel Cell

This paper presents an optimization of the hull separation, i.e., the transverse clearance, of a catamaran unmanned surface vehicle (USV). The main objective is to identify the feasible speed ranges and find the optimum transverse clearance for minimum wave-making resistance. The dimensions and weight of the hardware systems installed in the catamaran-structured, fuel-cell-powered USV were treated as constraints. The FRIENDSHIP-Framework was used as the CAE (Computer Aided Engineering) platform: hull surface modeling, DoE (Design of Experiments), tangent search optimization, tool integration, and process automation were performed within it. The hydrodynamic results were evaluated with XPAN, the potential flow solver of SHIPFLOW.

Optimum Shape and Design of Cooling Towers

The aim of the current study is to develop a numerical tool that is capable of achieving an optimum shape and design of hyperbolic cooling towers based on coupling a non-linear finite element model developed in-house and a genetic algorithm optimization technique. The objective function is set to be the minimum weight of the tower. The geometric modeling of the tower is represented by means of B-spline curves. The finite element method is applied to model the elastic buckling behaviour of a tower subjected to wind pressure and dead load. The study is divided into two main parts. The first part investigates the optimum shape of the tower corresponding to minimum weight assuming constant thickness. The study is extended in the second part by introducing the shell thickness as one of the design variables in order to achieve an optimum shape and design. Design, functionality and practicality constraints are applied.
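As an illustration of the coupling described above (not the paper's implementation), the sketch below shows a penalty-based genetic algorithm loop minimizing a weight objective subject to a buckling-safety constraint; the weight and buckling surrogates are placeholders for the in-house finite element model, and the design-variable bounds and GA settings are assumptions.

```python
# Compact sketch of a GA search for minimum tower weight with a penalty on
# the elastic buckling constraint.  weight() and buckling_factor() are
# placeholders for the in-house finite element model; bounds, population
# size, and mutation scale are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N_VARS, POP, GENS = 6, 40, 100       # e.g. B-spline radii (+ thickness later)
LOW, HIGH = 0.1, 1.0                 # illustrative design-variable bounds

def weight(x):                       # placeholder for FE-computed shell weight
    return float(np.sum(x))

def buckling_factor(x):              # placeholder for FE buckling analysis
    return 1.0 + float(np.min(x))

def fitness(x, required=1.5, penalty=1e3):
    violation = max(required - buckling_factor(x), 0.0)
    return weight(x) + penalty * violation     # penalised minimum-weight goal

pop = rng.uniform(LOW, HIGH, size=(POP, N_VARS))
best = min(pop, key=fitness)
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    best = min([best, pop[int(np.argmin(scores))]], key=fitness)
    # Tournament selection: the better of two random individuals.
    a, b = rng.integers(0, POP, (2, POP))
    parents = pop[np.where(scores[a] < scores[b], a, b)]
    # Arithmetic crossover with shuffled mates, then Gaussian mutation.
    mates = parents[rng.permutation(POP)]
    alpha = rng.random((POP, 1))
    children = alpha * parents + (1 - alpha) * mates
    children += rng.normal(0.0, 0.02, children.shape)
    pop = np.clip(children, LOW, HIGH)
    pop[0] = best                    # elitism

print("best weight:", weight(best), "buckling factor:", buckling_factor(best))
```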

Analysis of Message Authentication in Turbo Coded Halftoned Images using Exit Charts

Considering payload, reliability, security, and operational lifetime as the major constraints in image transmission, this paper puts forward a steganographic technique implemented at the physical layer. We suggest transmitting halftoned images (the payload constraint) in wireless sensor networks to reduce the amount of transmitted data. For low-power, interference-limited applications, turbo codes provide suitable reliability. Ensuring security is one of the highest priorities in many sensor networks, and the turbo code structure, apart from providing forward error correction, can also be utilized for encryption. We first consider the halftoned image and then present the method of embedding a block of data (the secret) in this halftoned image during the turbo encoding process. The small modifications required at the turbo decoder to extract the embedded data are presented next. The implementation complexity and the degradation of the BER (bit error rate) in the turbo-based stego system are analyzed. Using entropy-based cryptanalytic techniques, we show that the strength of our turbo-based stego system approaches that of the one-time pad (OTP).
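As an illustration of the payload-reduction step only (the paper's exact halftoning method is not specified here), the sketch below applies Floyd-Steinberg error diffusion, one common halftoning choice, to turn an 8-bit grayscale image into a 1-bit image before turbo encoding; the test image is an assumption.

```python
# Minimal sketch of the payload-reduction step: error-diffusion halftoning
# turns an 8-bit grayscale image into a 1-bit image before turbo encoding.
# Floyd-Steinberg weights are used here as one common choice; the paper's
# exact halftoning method and test image are not specified in this sketch.
import numpy as np

def floyd_steinberg(gray):
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128.0 else 0.0
            out[y, x] = 1 if new else 0
            err = old - new                    # diffuse the quantization error
            if x + 1 < w:                img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:      img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:                img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w:  img[y + 1, x + 1] += err * 1 / 16
    return out                         # 1 bit per pixel: 8x payload reduction

if __name__ == "__main__":
    test = np.tile(np.linspace(0, 255, 64), (64, 1))   # illustrative gradient
    bits = floyd_steinberg(test)
    print("payload bits:", bits.size, "vs original bits:", test.size * 8)
```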

Adaptation of Iterative Methods to Solve Fuzzy Mathematical Programming Problems

Based on fuzzy set theory, this work develops two adaptations of iterative methods that solve mathematical programming problems with uncertainties in the objective function and in the set of constraints. The first uses the approach proposed by Zimmermann for fuzzy linear programming problems as its basis, and the second obtains cut levels and then maximizes the membership function of the fuzzy decision using the bound search method. We outline similarities between the two iterative methods studied. Selected examples from the literature are presented to validate the efficiency of the methods addressed.
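As an illustration of the Zimmermann-style approach mentioned above (not an example from the paper), the sketch below solves the max-min (max-lambda) reformulation of a small fuzzy LP with scipy.optimize.linprog; all coefficients, the aspiration level z0, and the tolerances p0 and p are made-up illustrative numbers.

```python
# Small sketch of Zimmermann's max-min reformulation of a fuzzy LP, solved
# with scipy.optimize.linprog.  The coefficients, aspiration level z0 and
# tolerances p0, p are illustrative numbers, not data from the paper.
import numpy as np
from scipy.optimize import linprog

obj = np.array([3.0, 2.0])                # crisp objective: max obj.x
A   = np.array([[1.0, 1.0], [2.0, 1.0]])  # fuzzy constraints: A.x <~ b
b   = np.array([4.0, 6.0])
z0, p0 = 11.0, 3.0                        # fuzzy goal obj.x >~ z0, tolerance p0
p   = np.array([1.0, 2.0])                # constraint tolerances

# Decision vector [x1, x2, lam]; maximize lam  <=>  minimize -lam, where
#   goal:        -obj.x + p0*lam <= p0 - z0
#   constraints:  A.x   + p*lam  <= b + p
A_ub = np.vstack([np.hstack([-obj, [p0]]),
                  np.hstack([A, p.reshape(-1, 1)])])
b_ub = np.hstack([p0 - z0, b + p])
res = linprog(c=[0.0, 0.0, -1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0, 1)])

lam = res.x[2]
print("satisfaction level lambda =", round(lam, 3), "at x =", res.x[:2])
```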

Multiobjective Optimization Solution for Shortest Path Routing Problem

The shortest path routing problem is a multiobjective nonlinear optimization problem with constraints. It has previously been addressed by considering the Quality of Service parameters delay and cost as separate objectives or as a weighted sum of both. Multiobjective evolutionary algorithms can find multiple Pareto-optimal solutions in a single run, and this ability makes them attractive for problems with multiple, conflicting objectives. This paper uses an elitist multiobjective evolutionary algorithm based on the Non-dominated Sorting Genetic Algorithm (NSGA) to solve the dynamic shortest path routing problem in computer networks. A priority-based encoding scheme is proposed for population initialization. Elitism ensures that the best solutions do not deteriorate in subsequent generations. Results for a sample test network demonstrate the ability of the proposed approach to generate well-distributed Pareto-optimal solutions of the dynamic routing problem in a single run. The results obtained by NSGA are compared with a single-objective weighting-factor method solved with a Genetic Algorithm (GA).
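As an illustration of how a priority-based chromosome can be decoded into a path and evaluated on the two objectives (not the paper's exact decoding rules), the sketch below walks from source to destination by always choosing the unvisited neighbour with the highest priority; the toy topology, priorities, and per-link delay and cost values are assumptions.

```python
# Small sketch: decoding a priority-based chromosome into a source-destination
# path.  At each hop the unvisited neighbour with the highest priority is
# chosen.  Topology, priorities, and link (delay, cost) values are illustrative.
graph = {                      # node -> {neighbour: (delay, cost)}
    0: {1: (2, 5), 2: (4, 2)},
    1: {0: (2, 5), 3: (3, 4)},
    2: {0: (4, 2), 3: (1, 6)},
    3: {1: (3, 4), 2: (1, 6)},
}

def decode(priorities, src, dst):
    path, node, visited = [src], src, {src}
    while node != dst:
        candidates = [n for n in graph[node] if n not in visited]
        if not candidates:
            return None                      # dead end: infeasible chromosome
        node = max(candidates, key=lambda n: priorities[n])
        path.append(node)
        visited.add(node)
    return path

def objectives(path):
    # Delay and cost are evaluated separately, as in the multiobjective model.
    delay = sum(graph[a][b][0] for a, b in zip(path, path[1:]))
    cost  = sum(graph[a][b][1] for a, b in zip(path, path[1:]))
    return delay, cost

chromosome = [0.1, 0.9, 0.3, 0.7]            # one priority value per node
path = decode(chromosome, src=0, dst=3)
print(path, objectives(path))
```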

A New Heuristic Statistical Methodology for Optimizing Queuing Networks Using Discrete Event Simulation

Most real queuing systems include special properties and constraints that cannot be analyzed directly using the results of solved classical queuing models; the lack of Markovian features, non-exponential patterns, and service constraints are among such conditions. This paper presents a general applied algorithm for analyzing and optimizing queuing systems. The stages of the algorithm are described through a real case study, which consists of an almost completely non-Markovian system with a limited number of customers and capacities, as well as many of the common exceptions of real queuing networks. Simulation is used to optimize this system. The stages presented in this article include primary modeling, determining the kind of queuing system, index definition, statistical analysis and goodness-of-fit testing, model validation, and optimization of the system through simulation.
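As an illustration of the simulation stage (not the case-study model), the sketch below is a minimal discrete-event simulation of a single station with a limited number of servers, written with SimPy; the lognormal service time, arrival rate, and server count are illustrative placeholders for the non-exponential distributions identified in the goodness-of-fit stage.

```python
# Minimal discrete-event sketch of one service station with a limited number
# of servers, written with SimPy.  The lognormal service times and arrival
# rate are illustrative placeholders for the fitted (non-exponential)
# distributions of the case study.
import random
import simpy

RANDOM_SEED, SIM_TIME, SERVERS = 42, 1000.0, 2
waits = []

def customer(env, counter):
    arrived = env.now
    with counter.request() as req:
        yield req                                # queue until a server is free
        waits.append(env.now - arrived)
        yield env.timeout(random.lognormvariate(0.5, 0.4))   # non-exponential

def source(env, counter):
    while True:
        yield env.timeout(random.expovariate(1.0))            # arrivals
        env.process(customer(env, counter))

random.seed(RANDOM_SEED)
env = simpy.Environment()
counter = simpy.Resource(env, capacity=SERVERS)
env.process(source(env, counter))
env.run(until=SIM_TIME)
print(f"{len(waits)} customers served, mean wait {sum(waits)/len(waits):.2f}")
```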

ANN based Multi Classifier System for Prediction of High Energy Shower Primary Energy and Core Location

Cosmic showers, during their transit through space, produce sub-products as a result of interactions with the intergalactic or interstellar medium which, after entering the Earth's atmosphere, generate secondary particles called an Extensive Air Shower (EAS). Detection and analysis of high energy particle showers involve a plethora of theoretical and experimental work with a host of constraints, resulting in measurement inaccuracies. There is therefore a need for a readily available system based on soft-computing approaches that can be used for EAS analysis, since soft-computing tools such as Artificial Neural Networks (ANNs) can be trained as classifiers that adapt to and learn the surrounding variations. However, single classifiers fail to reach optimal decision making in many situations, for which Multiple Classifier Systems (MCS) are preferred to enhance the ability of the system to adjust to finer variations. This work describes the formation of an MCS using a Multi Layer Perceptron (MLP), a Recurrent Neural Network (RNN), and a Probabilistic Neural Network (PNN), with data inputs from correlation-mapping Self Organizing Map (SOM) blocks and the output optimized by another SOM. The results show that the setup can be adopted for real-time practical applications for predicting the primary energy and core location of an EAS from density values captured using detectors in a circular grid.

A PIM (Processor-In-Memory) for Computer Graphics: Data Partitioning and Placement Schemes

The demand for higher-performance graphics continues to grow because of the incessant desire for realism, and rapid advances in fabrication technology have enabled several processor cores to be built on a single die. It is therefore important to develop single-chip parallel architectures for such data-intensive applications. In this paper, we propose an efficient PIM architecture tailored for computer graphics, which requires a large number of memory accesses. We then address the two tasks necessary for maximally exploiting the parallelism provided by the architecture, namely partitioning and placement of graphics data, which affect load balance and communication cost, respectively. Under the constraint of uniform partitioning, we develop approaches for optimal partitioning and placement that significantly reduce the search space. We also present heuristics for identifying near-optimal placement, since the search space for placement remains impractically large despite our optimization. We then demonstrate the effectiveness of our partitioning and placement approaches via analysis of example scenes; simulation results show considerable search-space reductions, and our placement heuristics perform close to optimal: the average ratio of communication overhead between our heuristics and the optimal was 1.05. Our uniform partitioning showed average load-balance ratios of 1.47 for geometry processing and 1.44 for rasterization, which is reasonable.
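As an illustration of the load-balance metric quoted above (not the paper's partitioning scheme itself), the tiny sketch below bins primitives into equal screen-space tiles, one tile per core, and reports the ratio of the most loaded tile to the average load; the random scene, screen size, and core count are assumptions.

```python
# Tiny sketch of the load-balance metric for uniform partitioning: primitives
# are binned into equal screen-space tiles, one per core, and the imbalance is
# the most loaded tile divided by the average load.  Scene, screen size, and
# core count are illustrative.
import numpy as np

rng = np.random.default_rng(1)
CORES_X, CORES_Y, SCREEN = 4, 4, 1024               # 16 cores, 1024x1024 screen
prims = rng.integers(0, SCREEN, size=(10000, 2))    # primitive centroids (x, y)

tile_w, tile_h = SCREEN // CORES_X, SCREEN // CORES_Y
tile_ids = (prims[:, 0] // tile_w) + CORES_X * (prims[:, 1] // tile_h)
loads = np.bincount(tile_ids, minlength=CORES_X * CORES_Y)

ratio = loads.max() / loads.mean()
print(f"load-balance ratio (max/average): {ratio:.2f}")
```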

Multi-Stakeholder Road Pricing Game: Solution Concepts

A road pricing game is a game in which various stakeholders and/or regions with different (and usually conflicting) objectives compete in setting tolls in a given transportation network to satisfy their individual objectives. We investigate several classical game-theoretical solution concepts for the road pricing game and establish results so that stakeholders and/or regions playing such a game know beforehand what is obtainable. This saves time and argument and, above all, removes feelings of unfairness among the competing actors and road users. Among the classical solution concepts we investigate is the Nash equilibrium. In particular, we show that no pure Nash equilibrium exists among the actors and further illustrate that even a "mixed Nash equilibrium" may not be achievable in the road pricing game. The paper also demonstrates the types of coalitions that are not only reachable but also stable and profitable for the actors involved.
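As an illustration of the pure-Nash check discussed above (not the paper's network-derived payoffs), the compact sketch below exhaustively tests every pair of discretised toll levels in a two-actor toll-setting game; the toll grid and the payoff functions are assumptions.

```python
# Compact sketch of an exhaustive pure-Nash check for a two-actor toll-setting
# game over a discretised strategy set.  Toll levels and payoff functions are
# illustrative; network-derived payoffs would replace them.
import itertools

TOLLS = [0.0, 1.0, 2.0, 3.0]          # each actor picks a toll level

def payoff_1(t1, t2):                 # illustrative revenue-style payoffs
    return t1 * max(4.0 - t1 - 0.5 * t2, 0.0)

def payoff_2(t1, t2):
    return t2 * max(4.0 - t2 - 0.5 * t1, 0.0)

def is_pure_nash(t1, t2):
    # Neither actor can improve by a unilateral deviation.
    best_1 = all(payoff_1(t1, t2) >= payoff_1(s, t2) for s in TOLLS)
    best_2 = all(payoff_2(t1, t2) >= payoff_2(t1, s) for s in TOLLS)
    return best_1 and best_2

equilibria = [p for p in itertools.product(TOLLS, TOLLS) if is_pure_nash(*p)]
print("pure Nash equilibria:", equilibria or "none")
```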

Drum-Buffer-Rope: The Technique to Plan and Control the Production Using Theory of Constraints

The Theory of Constraints (TOC) has been emerging as an important tool for the optimization of manufacturing and service systems. Goldratt, in his first book "The Goal", introduced the Theory of Constraints and its applications in a factory scenario. A large number of production managers around the globe read this book, but only a few could implement it in their plants because the book did not explain the steps needed to implement TOC in a factory. To overcome these limitations, Goldratt wrote this book to explain TOC, Drum-Buffer-Rope (DBR), and the method to implement them. In this paper, an attempt has been made to summarize the salient features of TOC and DBR described in the book and the correct approach to implementing TOC in a factory setting. The simulator supplied with the book was used by the authors, and Goldratt's claim that DBR and buffer management ease the work of production managers was tested and found to be correct.

Introduction of Open-Source e-Learning Environment and Resources: A Novel Approach for Secondary Schools in Tanzania

The concept of e-Learning is now emerging in Sub-Saharan African countries such as Tanzania. Due to economic constraints and other social and cultural factors faced by these countries, the use of Information and Communication Technology (ICT) is increasing at a very low pace. The threat of the digital divide propelled the Government of Tanzania to put in place the national ICT Policy in 2003, which defines the direction of all ICT activities nationally. Among its main focus areas is the use of ICT in education, since the development of any country requires the creation of a knowledge-based society. This paper discusses the initiatives made so far to introduce ICT tools to some secondary schools using open-source software for e-content development to facilitate a self-learning environment.

A Fast Directionally Constrained Minimization of Power Algorithm for Extracting a Speech Signal Perpendicular to a Microphone Array

In this paper, an extended directionally constrained minimization of power (DCMP) algorithm for broadband signals is proposed. The DCMP algorithm is a useful technique for extracting a target signal from the observed signals of a microphone array system: the output power of the microphone array is minimized under a constraint of constant response towards the directions of arrival (DOAs) of specific signals. In our algorithm, by limiting the directional constraint to the direction perpendicular to the sensor array, the computation time is reduced.
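As a numerical illustration of the constrained power minimization (not the paper's broadband extension), the sketch below computes the classic narrowband DCMP/LCMV weights w = R^{-1} C (C^H R^{-1} C)^{-1} h for a uniform linear array; the array size, spacing, frequency, and sample covariance are assumptions. For the perpendicular (broadside) look direction used in the paper, the steering vector reduces to all ones, which is what simplifies the computation.

```python
# Brief numerical sketch of the directionally constrained minimization of
# power (DCMP / LCMV) weights  w = R^{-1} C (C^H R^{-1} C)^{-1} h  for a
# uniform linear array.  Mic count, spacing, frequency, and the sample
# covariance are illustrative.
import numpy as np

M, d, c, f = 8, 0.04, 343.0, 2000.0       # mics, spacing [m], sound speed, freq

def steering(theta_deg):
    tau = d * np.arange(M) * np.sin(np.deg2rad(theta_deg)) / c
    return np.exp(-2j * np.pi * f * tau)

rng = np.random.default_rng(0)
snap = rng.standard_normal((M, 400)) + 1j * rng.standard_normal((M, 400))
R = snap @ snap.conj().T / 400 + 1e-3 * np.eye(M)   # sample covariance + loading

C = steering(0.0).reshape(-1, 1)          # constraint: unit gain, broadside
h = np.array([[1.0 + 0j]])
Rinv_C = np.linalg.solve(R, C)
w = Rinv_C @ np.linalg.solve(C.conj().T @ Rinv_C, h)

print("broadside response:", abs((w.conj().T @ steering(0.0)).item()))  # ~1.0
print("output power:", (w.conj().T @ R @ w).real.item())
```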

The Use of a Bespoke Computer Game for Teaching Analogue Electronics

An implementation of a design for a game-based virtual learning environment is described. The game was developed for a course in analogue electronics, and its topic is the design of a power supply. This task can be solved in a number of different ways within certain constraints, giving the students a certain amount of freedom, although the game is designed not to facilitate a trial-and-error approach. The use of storytelling and a virtual gaming environment provides the student with the learning material in an MMORPG-style environment. The game was tested on a group of second-year electrical engineering students with good results.

Evaluation of Optimum Performance of Lateral Intakes

In designing river intakes and diversion structures, it is paramount that the sediments entering the intake are minimized or, if possible, completely separated. Because of high water velocity, sediments can significantly damage hydraulic structures, especially when mechanical equipment such as pumps and turbines is used, which subsequently results in wasted water, electricity, and further costs. It is therefore prudent to investigate and analyze the performance of lateral intakes affected by sediment control structures. Laboratory experiments, despite their vast potential and benefits, face certain limitations and challenges, including limited equipment and facilities, space constraints, equipment errors such as insufficient precision or mal-operation, and, finally, human error. Research has shown that to achieve the ultimate goal of intake structure design, namely long-lasting and proficient structures, the best combination of sediment control structures (such as sills and submerged vanes) should be determined together with the parameters that increase their performance (such as diversion angle and location). Cost, difficulty of execution, and environmental impacts should also be included in evaluating the optimal design, and the resulting solution can then be applied to similar problems in the future. The model used to arrive at the optimal design therefore requires a high level of accuracy and precision in order to avoid improper design and execution of projects, and the process of creating and executing the design should be as comprehensive and applicable as possible. It is thus important that influential parameters and vital criteria are fully understood and applied at all stages of choosing the optimal design. In this article, the parameters influencing optimal intake performance, the advantages and disadvantages, and the efficiency of a given design are studied. A multi-criterion decision matrix is then used to choose the optimal model, which can be used to determine the proper parameters for constructing the intake.
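As an illustration of the decision-matrix step only (not the paper's criteria or data), the short sketch below applies simple additive weighting with min-max normalisation to rank candidate designs; the candidate intake designs, criterion values, and weights are assumptions.

```python
# Short sketch of a multi-criterion decision matrix using simple additive
# weighting with min-max normalisation.  Candidate designs, criterion values,
# and weights are illustrative only.
import numpy as np

# Rows: candidate designs (e.g. sill + submerged-vane variants).
# Columns: sediment entry, cost, execution difficulty, environmental impact
# (all "smaller is better" here).
matrix = np.array([
    [0.20, 3.0, 2.0, 1.0],
    [0.15, 4.5, 3.0, 2.0],
    [0.30, 2.0, 1.0, 1.0],
])
weights = np.array([0.5, 0.2, 0.2, 0.1])      # relative importance of criteria

# Min-max normalisation so that 1.0 is best (lowest) for every criterion.
norm = (matrix.max(axis=0) - matrix) / (matrix.max(axis=0) - matrix.min(axis=0))
scores = norm @ weights
best = int(np.argmax(scores))
print("scores:", np.round(scores, 3), "-> preferred design:", best)
```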

The Effects of Detector Spacing on Travel Time Prediction on Freeways

Loop detectors report traffic characteristics in real time and are at the core of the traffic control process. Intuitively, one would expect that as the density of detection increases, so would the quality of estimates derived from detector data. However, as detector deployment increases, the associated operating and maintenance costs increase. Traffic agencies therefore often need to decide where to add new detectors and which detectors should continue to receive maintenance, given their resource constraints. This paper evaluates the effect of detector spacing on freeway travel time estimation. A freeway section (Interstate 15) in the Salt Lake City metropolitan region is examined. The research reveals that travel time accuracy does not necessarily deteriorate with increased detector spacing; rather, the actual location of the detectors has a far greater influence on the quality of travel time estimates. The study presents an innovative computational approach that delivers optimal detector locations through a process based on a Genetic Algorithm formulation.

Trace Emergence of Ants' Traffic Flow Based upon an Exclusion Process

Biological evolution has generated a rich variety of successful solutions, and optimized strategies can be drawn from nature. One interesting example is ant colonies, which are able to exhibit collective intelligence even though their individual dynamics are simple. The emergence of different patterns depends on the pheromone trail laid by the foragers, which serves as a positive feedback mechanism for sharing information. In this paper, we use the dynamics of the TASEP (Totally Asymmetric Simple Exclusion Process) as a model of low-level interaction with the collective environment in ant traffic flow. The work consists of modifying the movement rules of the particles ("ants") in the TASEP model so that they match the natural movement of ants. To respect the constraint of having no more than one particle per site, and to avoid collisions in bidirectional circulation, we suggest two strategies: a decease strategy and a waiting strategy. The third stage of the work is devoted to studying the stability of these two proposed strategies. In the final stage, we apply the first strategy to the whole environment in order to obtain the emergence of traffic flow, which constitutes a form of learning.
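As an illustration of the underlying exclusion dynamics only (the ant-specific decease and waiting strategies are not modelled here), the sketch below simulates the standard TASEP with random sequential updates on a ring; the lattice size, particle density, and hop probability are assumptions.

```python
# Minimal sketch of the TASEP dynamics underlying the ant-traffic model: at
# most one particle per site, and a randomly selected particle hops one site
# forward with probability P_HOP only if the target site is empty (random
# sequential update on a ring).  L, DENSITY, and P_HOP are illustrative.
import numpy as np

rng = np.random.default_rng(0)
L, DENSITY, P_HOP, STEPS = 100, 0.3, 0.8, 5000

lattice = np.zeros(L, dtype=int)
lattice[rng.choice(L, int(DENSITY * L), replace=False)] = 1

hops = 0
for _ in range(STEPS):
    i = rng.integers(L)                       # random sequential update
    j = (i + 1) % L                           # periodic boundary (ring)
    if lattice[i] == 1 and lattice[j] == 0 and rng.random() < P_HOP:
        lattice[i], lattice[j] = 0, 1         # exclusion: hop only if empty
        hops += 1

current = hops / STEPS                        # average flow per update attempt
print(f"density {lattice.mean():.2f}, measured current {current:.3f}")
```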

Research on Weakly Hard Real-Time Constraints and Their Boolean Combination to Support Adaptive QoS

Advances in computing applications in recent years have prompted demand for more flexible scheduling models that support QoS. Moreover, in practical applications, partially violated temporal constraints can be tolerated if the violations follow a certain distribution, so the traditional Liu and Layland model needs to be extended to accommodate these circumstances. Two such extensions are the (m, k)-firm model and the window-constrained model. This paper studies weakly hard real-time constraints and their Boolean combination to support QoS. The fact that a practical application can tolerate some violations of a temporal constraint under a certain distribution is employed to support adaptive QoS on an open real-time system. The experimental results show that these approaches are effective compared to traditional scheduling algorithms.
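As an illustration of how an (m, k)-firm weakly hard constraint and a Boolean combination of such constraints can be checked on a trace of job outcomes (not the paper's scheduling algorithm), the small sketch below slides a window of k consecutive instances and requires at least m deadline hits in every window; the outcome trace and the (m, k) pairs are assumptions.

```python
# Small sketch of evaluating an (m, k)-firm weakly hard constraint: in every
# window of k consecutive job instances, at least m must meet their deadlines.
# The trace (1 = deadline met, 0 = missed) and the (m, k) pairs are illustrative.
def satisfies_mk_firm(outcomes, m, k):
    """True if every window of k consecutive outcomes contains >= m hits.
    (Vacuously true for traces shorter than k.)"""
    return all(sum(outcomes[i:i + k]) >= m
               for i in range(len(outcomes) - k + 1))

trace = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
print(satisfies_mk_firm(trace, m=3, k=4))     # every 4-window has >= 3 hits?

# A Boolean combination (here AND) of two weakly hard constraints:
ok = satisfies_mk_firm(trace, 3, 4) and satisfies_mk_firm(trace, 1, 2)
print("combined constraint satisfied:", ok)
```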