Application of Transportation Linear Programming Algorithms to Cost Reduction in Nigeria Soft Drinks Industry

The transportation problem is primarily concerned with the optimal way in which products produced at different plants (supply origins) are transported to a number of warehouses or customers (demand destinations). The objective in a transportation problem is to fully satisfy the destination requirements within the operating production capacity constraints at the minimum possible cost. The objective of this study is to determine ways of minimizing transportation cost in order to maximize profit. Data were sourced from the records of the Distribution Department of 7-Up Bottling Company Plc., Ilorin, Kwara State, Nigeria. The data were computed and analyzed using the three methods of solving the transportation problem. The result shows that the three methods produced the same total transportation cost of N1,358,019, implying that any of the methods can be adopted by the company in transporting its final products to the wholesale dealers in order to minimize total transportation cost.
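
As background, the optimization the company faces can be written as a balanced transportation linear program. The sketch below solves a small illustrative instance with SciPy; the supplies, demands and unit costs are assumptions for illustration, not the company's data.

```python
# Minimal sketch: a balanced transportation problem solved as a linear program.
# Supplies, demands, and unit costs are illustrative, not the company's data.
import numpy as np
from scipy.optimize import linprog

supply = np.array([300, 400, 500])        # units available at each plant
demand = np.array([250, 350, 400, 200])   # units required at each dealer
cost = np.array([[8, 6, 10, 9],           # unit shipping cost plant -> dealer
                 [9, 12, 13, 7],
                 [14, 9, 16, 5]])

m, n = cost.shape
c = cost.flatten()                        # decision variables x[i, j], flattened row-wise

# Supply constraints: sum_j x[i, j] == supply[i]
A_supply = np.zeros((m, m * n))
for i in range(m):
    A_supply[i, i * n:(i + 1) * n] = 1
# Demand constraints: sum_i x[i, j] == demand[j]
A_demand = np.zeros((n, m * n))
for j in range(n):
    A_demand[j, j::n] = 1

res = linprog(c,
              A_eq=np.vstack([A_supply, A_demand]),
              b_eq=np.concatenate([supply, demand]),
              bounds=[(0, None)] * (m * n),
              method="highs")
print("minimum total transportation cost:", res.fun)
print("optimal shipment plan:\n", res.x.reshape(m, n).round(1))
```

Any of the classical solution methods for the transportation tableau, once driven to optimality, should reproduce the same minimum cost that the LP solver reports.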

Performance Evaluation and Cost Analysis of Standby Systems

Pumping systems are an integral part of water desalination plants, and their effective functioning is vital for the operation of a plant. In this research work, the reliability and availability of pressurized pumps in a reverse osmosis desalination plant are studied with the objective of finding configurations that provide optimal performance. Six configurations of a series system with different numbers of warm and cold standby components were examined. Closed-form expressions for the mean time to failure (MTTF) and the long-run availability are derived and compared under the assumption that the times between failures and the repair times of the primary and standby components are exponentially distributed. Moreover, a cost/benefit analysis is conducted in order to identify the configuration with the best performance and least cost. It is concluded that configurations with cold standby components are preferable, especially when the pumps are of the same size.
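
For orientation, the simplest closed-form case illustrates why the standby regime matters. Assuming one operating pump with failure rate \(\lambda\), one standby with standby failure rate \(\lambda_s\), exponential lifetimes, perfect switching and no repair (the paper's expressions, which include repair, are more involved):

\[
\mathrm{MTTF} = \frac{1}{\lambda+\lambda_s} + \frac{1}{\lambda},
\qquad
\mathrm{MTTF}_{\text{cold}} = \frac{2}{\lambda}\ (\lambda_s=0),
\qquad
\mathrm{MTTF}_{\text{hot}} = \frac{3}{2\lambda}\ (\lambda_s=\lambda).
\]

A cold standby, which cannot fail while idle, therefore yields the larger MTTF in this simplified setting.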

Modal Analysis of Machine Tool Column Using Finite Element Method

The performance of a machine tool is ultimately assessed by its ability to produce a component of the required geometry in minimum time and at low operating cost. It is customary to base the structural design of any machine tool primarily upon the requirements of static rigidity and minimum natural frequency of vibration. The operating properties of the machine, such as cutting speed, feed and depth of cut, as well as the size of the workpiece, also have to be kept in mind by the machine tool structural designer. This paper presents a novel approach to the design of a machine tool column for static and dynamic rigidity requirements. Model evaluation is carried out using the general-purpose finite element analysis software ANSYS. Studies on a machine tool column are used to illustrate the finite element based concept evaluation technique. The paper also presents results from the static analysis of thin-walled box-type columns subjected to torsional and bending loads, as well as results from modal analysis. The columns analyzed are square- and rectangle-based tapered open columns, columns with cover plates, columns with horizontal partitions, and columns with apertures. In total, 70 columns were analyzed for bending, torsion, and modal behavior. It is observed that the orientation and aspect ratio of the apertures have no significant effect on the static and dynamic rigidity of the machine tool structure.
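
For context, the modal analysis referred to above reduces, in the standard finite element setting, to the generalized eigenvalue problem

\[
\left(\mathbf{K} - \omega_i^{2}\,\mathbf{M}\right)\boldsymbol{\phi}_i = \mathbf{0},
\]

where \(\mathbf{K}\) and \(\mathbf{M}\) are the assembled stiffness and mass matrices of the column, \(\omega_i\) is the i-th natural circular frequency, and \(\boldsymbol{\phi}_i\) is the corresponding mode shape; this is the computation ANSYS performs for each column variant.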

A Source Point Distribution Scheme for Wave-Body Interaction Problem

A two-dimensional linear wave-body interaction problem can be solved using a desingularized integral method by placing free-surface Rankine sources over the calm water surface and satisfying boundary conditions at prescribed collocation points on the calm water surface. A new free-surface Rankine source distribution scheme, determined by the intersection points of the free surface and the body surface, is developed to reduce the numerical computation cost. In connection with this, a new treatment of the intersection point is given. Results from the present scheme are in good agreement with traditional numerical results and measurements.

Reduction of Rotor-Bearing-Support Finite Element Model through Substructuring

Due to their simplicity and low cost, lumped-parameter models are often used for rotordynamic systems. More recently, finite elements have been used to model rotordynamic systems because they offer higher accuracy; however, they involve many degrees of freedom, which in applications such as control design leads to higher cost. For this reason, various model reduction methods have been proposed. This work demonstrates the quality of model reduction of a rotor-bearing-support system through substructuring. The quality of the reduction is evaluated by comparing the first few natural frequencies, modal damping ratios, critical speeds, and responses of the full system and the reduced system. The simulations show that substructuring is adequate for reducing the finite element rotor model in the frequency range of interest, as long as the number and location of the master nodes are chosen appropriately. However, the reduction is less accurate for an unstable or nearly unstable system.
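
As background on the master/slave partitioning mentioned above, one widely used substructuring-type reduction is Guyan (static) condensation; the scheme used in the paper may differ in detail, but the structure is the same. With m denoting master (retained) and s slave (condensed) degrees of freedom:

\[
\mathbf{K} =
\begin{bmatrix}
\mathbf{K}_{mm} & \mathbf{K}_{ms}\\
\mathbf{K}_{sm} & \mathbf{K}_{ss}
\end{bmatrix},
\qquad
\mathbf{T} =
\begin{bmatrix}
\mathbf{I}\\
-\mathbf{K}_{ss}^{-1}\mathbf{K}_{sm}
\end{bmatrix},
\qquad
\mathbf{K}_r = \mathbf{T}^{\mathsf T}\mathbf{K}\,\mathbf{T},
\quad
\mathbf{M}_r = \mathbf{T}^{\mathsf T}\mathbf{M}\,\mathbf{T}.
\]

The accuracy of the reduced matrices \(\mathbf{K}_r, \mathbf{M}_r\) depends directly on how the master nodes are selected, which is exactly the sensitivity examined in the paper.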

On the Computation of a Common n-finger Robotic Grasp for a Set of Objects

Industrial robotic arms utilize multiple end-effectors, each for a specific part and a specific task. We propose a novel algorithm that defines a single end-effector configuration able to grasp a given set of objects with different geometries. The algorithm will have great benefit in production lines, allowing a single robot to grasp various parts and thereby reducing the number of end-effectors needed. Moreover, the algorithm will reduce end-effector design and manufacturing time and final product cost. The algorithm searches for a common grasp over the set of objects. It maps all possible grasps for each object that satisfy a quality criterion and take into account possible external wrenches (forces and torques) applied to the object. The mapped grasps are represented by high-dimensional feature vectors that describe the shape of the gripper. We generate a database of all possible grasps for each object in the feature space. Then we use a search and classification algorithm to intersect the possible grasps over all parts and find a single common grasp suitable for all objects. We present simulations of planar and spatial objects to validate the feasibility of the approach.

Design of Walking Beam Pendle Axle Suspension System

This paper deals with the design of a walking beam pendel axle suspension system. These axles and suspension systems are mainly required for the transportation of heavy-duty and Over Dimension Consignment (ODC) cargo, which exceeds legal limits in terms of length, width and height. Presently, in the Indian transportation industry, ODC movement has grown for the transportation of bridge sections (pre-cast beams), transformers, heavy machinery, boilers, gas turbines, windmill blades, etc. However, current Indian standard road transport vehicles face a lot of service and maintenance issues due to the non-availability of suitable axles and suspensions to carry ODC cargoes. This in turn leads to an increased number of road accidents, bridge collapses and delayed deliveries, which finally result in higher operating cost. This work was carried out with these requirements in mind. The axles and suspensions are designed for optimum self-weight and maximum payload carrying capacity with better road stability.

Effectiveness of Business Software Systems Development and Enhancement Projects versus Work Effort Estimation Methods

Execution of Business Software Systems (BSS) Development and Enhancement Projects (D&EP) is characterized by exceptionally low effectiveness, leading to considerable financial losses. The general reason for the low effectiveness of such projects is that they are inappropriately managed. One of the factors of proper BSS D&EP management is a suitable (reliable and objective) method of project work effort estimation, since this is what determines correct estimation of the project's major attributes: cost and duration. A BSS D&EP is usually considered to be accomplished effectively if a product of the planned functionality is delivered without cost and time overruns. The goal of this paper is to prove that the choice of approach to BSS D&EP work effort estimation has a considerable influence on the effectiveness of such projects' execution.

Comparison between Haar and Daubechies Wavelet Transformations on FPGA Technology

Recently, Field Programmable Gate Array (FPGA) technology has offered the potential of designing high-performance systems at low cost. The discrete wavelet transform has gained a reputation as a very effective signal analysis tool for many practical applications. However, due to its computation-intensive nature, current implementations of the transform fall short of meeting the real-time processing requirements of most applications. The objective of this paper is to implement the Haar and Daubechies wavelets using FPGA technology. In addition, the Bit Error Rate (BER) between the input audio signal and the reconstructed output signal is calculated for each wavelet. From the BER, it is seen that the implementations execute the wavelet transform correctly and satisfy the perfect reconstruction conditions. The design procedure is explained and carried out using state-of-the-art Electronic Design Automation (EDA) tools for system design on FPGA. Simulation, synthesis and implementation on the FPGA target technology have been carried out.
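
As a point of reference for the perfect-reconstruction and BER checks described above, the sketch below runs a single-level Haar analysis/synthesis pair in software (not on the FPGA) and computes a BER after fixed-point quantization; the test signal and the 16-bit word length are illustrative assumptions.

```python
# Minimal software sketch: single-level Haar analysis/synthesis and a BER check.
# The test signal and 16-bit word length are illustrative assumptions.
import numpy as np

def haar_forward(x):
    """Single-level Haar DWT: returns (approximation, detail) coefficients."""
    x = x.reshape(-1, 2)
    s = np.sqrt(2.0)
    return (x[:, 0] + x[:, 1]) / s, (x[:, 0] - x[:, 1]) / s

def haar_inverse(a, d):
    """Single-level inverse Haar DWT (perfect reconstruction)."""
    s = np.sqrt(2.0)
    y = np.empty(2 * len(a))
    y[0::2] = (a + d) / s
    y[1::2] = (a - d) / s
    return y

def bit_error_rate(x, y, bits=16):
    """Quantize both signals to `bits`-bit words and compare their bit patterns."""
    scale = 2 ** (bits - 1) - 1
    qx = np.round(x * scale).astype(np.int32) & (2 ** bits - 1)
    qy = np.round(y * scale).astype(np.int32) & (2 ** bits - 1)
    errors = np.unpackbits((qx ^ qy).astype(np.uint32).view(np.uint8)).sum()
    return errors / (bits * len(x))

rng = np.random.default_rng(0)
signal = rng.uniform(-1, 1, 1024)          # stand-in for an audio frame
a, d = haar_forward(signal)
reconstructed = haar_inverse(a, d)
print("BER:", bit_error_rate(signal, reconstructed))   # ~0 for perfect reconstruction
```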

High Speed Video Transmission for Telemedicine using ATM Technology

In this paper, we study statistical multiplexing of VBR video in ATM networks. ATM promises to provide high-speed, real-time, multi-point-to-central video transmission for telemedicine applications in rural hospitals and in emergency medical services. Video coders are known to produce variable bit rate (VBR) signals, and the effects of aggregating these VBR signals need to be determined in order to design a telemedicine network infrastructure capable of carrying them. We first model the VBR video signal and simulate it using a generic continuous-data autoregressive (AR) scheme. We carry out the queueing analysis with the Fluid Approximation Model (FAM) and the Markov Modulated Poisson Process (MMPP). The study shows a trade-off: multiplexing VBR signals reduces burstiness and improves resource utilization; however, the buffer size needs to be increased, with an associated economic cost. We also show that the MMPP model and the Fluid Approximation Model best fit the cell region and the burst region, respectively. Therefore, a hybrid of MMPP and FAM completely characterizes the overall performance of the ATM statistical multiplexer. The ramifications of this technology are clear: speed, reliability (lower loss rate and jitter), and increased capacity in video transmission for telemedicine. With migration to fully IP-based networks still a long way from achieving both high speed and high quality of service, the proposed ATM architecture will remain of significant use for telemedicine.
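
A minimal sketch of the kind of first-order autoregressive source commonly used to generate per-frame VBR video bit rates, of the generic form referred to above; the coefficients and mean rate below are illustrative assumptions, not values fitted in the paper.

```python
# Minimal sketch of a first-order autoregressive (AR(1)) VBR video source:
#   rate_n = a * rate_(n-1) + (1 - a) * mean_rate + b * w_n,  w_n ~ N(0, 1),
# clipped at zero so the generated bit rate stays non-negative.
# The coefficients a, b and mean_rate are illustrative assumptions.
import numpy as np

def ar1_vbr_source(n_frames, a=0.9, b=0.1, mean_rate=0.5, seed=0):
    """Generate a sequence of per-frame bit rates (e.g. in bits/pixel)."""
    rng = np.random.default_rng(seed)
    rate = np.empty(n_frames)
    rate[0] = mean_rate
    for n in range(1, n_frames):
        # AR(1) recursion with the mean folded in so E[rate] = mean_rate
        rate[n] = max(0.0, a * rate[n - 1] + (1 - a) * mean_rate + b * rng.normal())
    return rate

rates = ar1_vbr_source(10_000)
print("mean bit rate:", rates.mean(), " std:", rates.std())
```

Feeding the aggregate of several such sources into a queue model is what exposes the burstiness/buffer-size trade-off discussed above.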

Application of “Multiple Risk Communicator” to the Personal Information Leakage Problem

Along with the progress of our information society, various risks are becoming increasingly common, causing multiple social problems. For this reason, risk communications for establishing consensus among stakeholders who have different priorities have become important. However, it is not always easy for the decision makers to agree on measures to reduce risks based on opposing concepts, such as security, privacy and cost. Therefore, we previously developed and proposed the “Multiple Risk Communicator” (MRC) with the following functions: (1) modeling the support role of the risk specialist, (2) an optimization engine, and (3) displaying the computed results. In this paper, MRC program version 1.0 is applied to the personal information leakage problem. The application process and validation of the results are discussed.

MONPAR - A Page Replacement Algorithm for a Spatiotemporal Database

For a spatiotemporal database management system, the I/O cost of queries and other operations is an important performance criterion. In order to optimize this cost, intense research on designing robust index structures has been done in the past decade. Beyond these major considerations, there are still other design issues that deserve attention due to their direct impact on the I/O cost; in particular, an efficient buffer management strategy plays a key role in reducing redundant disk accesses. In this paper, we propose an efficient buffer strategy for a spatiotemporal database index structure, specifically one indexing objects moving over a network of roads. The proposed strategy, named MONPAR, is based on the data type (i.e., spatiotemporal data) and on the structure of the index. For the experimental evaluation, we set up a simulation environment that counts the number of disk accesses while executing a number of spatiotemporal range queries over the index. We repeated the simulations with query sets of different distributions, such as uniform and skewed query distributions. Based on the comparison of our strategy with well-known page-replacement techniques, namely LRU-based and priority-based buffers, we conclude that MONPAR behaves better than its competitors for small and medium-sized buffers under all tested query distributions.
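
For reference, the sketch below shows the LRU-based baseline that MONPAR is compared against, instrumented to count disk accesses (page misses); MONPAR itself additionally exploits the spatiotemporal nature of the data and the index structure, which is not reflected in this simple baseline. The access trace is illustrative.

```python
# Minimal sketch of the LRU baseline buffer used for comparison, instrumented
# to count disk accesses (page misses). The page-access trace is illustrative.
from collections import OrderedDict

class LRUBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # page_id -> page payload
        self.disk_accesses = 0

    def get(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)          # hit: mark as most recently used
            return self.pages[page_id]
        self.disk_accesses += 1                      # miss: fetch from disk
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)           # evict least recently used page
        self.pages[page_id] = f"page-{page_id}"      # stand-in for the real page
        return self.pages[page_id]

buf = LRUBuffer(capacity=3)
for pid in [1, 2, 3, 1, 4, 2, 5, 1]:                 # an illustrative access trace
    buf.get(pid)
print("disk accesses:", buf.disk_accesses)
```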

Optimal All-to-All Personalized Communication in All-Port Tori

All-to-all personalized communication, also known as complete exchange, is one of the densest communication patterns in parallel computing. In this paper, we propose new indirect algorithms for complete exchange on all-port rings and tori. The new algorithms fully utilize all communication links and transmit messages along shortest paths to achieve the theoretical lower bounds on message transmission, which have not been achieved by other existing indirect algorithms. For a 2D r × c ( r % c ) all-port torus, the algorithm has optimal transmission cost and O(c) message startup cost. In addition, the proposed algorithms accommodate non-power-of-two tori, where the number of nodes in each dimension need not be a power of two or square. Finally, the algorithms are conceptually simple and symmetrical for every message and every node, so that they can be easily implemented and achieve the optimum in practice.

Effect of Flowrate and Coolant Temperature on the Efficiency of Progressive Freeze Concentration on Simulated Wastewater

Freeze concentration freezes or crystallises the water molecules out as ice crystals and leaves behind a highly concentrated solution. In conventional suspension freeze concentration, where ice crystals form as a suspension in the mother liquor, separation of the ice is difficult. The size of the ice crystals is also very limited, which requires the use of scraped-surface heat exchangers; these are very expensive and account for approximately 30% of the capital cost. This research uses a newer method, progressive freeze concentration, in which ice crystals form as a layer on the designed heat exchanger surface. In this particular work, a helical structured copper crystallisation chamber was designed and fabricated. The effects of two operating conditions, circulation flowrate and coolant temperature, on the performance of the newly designed crystallisation chamber were investigated. The performance of the design was evaluated by the effective partition constant, K, calculated from the volumes and concentrations of the solid and liquid phases. The system was also monitored by a data acquisition tool in order to follow the temperature profile throughout the process. It was found that a higher flowrate resulted in a lower K, which translates into higher efficiency; the efficiency was highest at 1000 ml/min. It was also found that the process gives the highest efficiency at a coolant temperature of -6 °C.
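
For background, the effective partition constant is commonly defined from the solute concentrations of the ice and liquid phases, and, under the usual assumption that K stays constant during freezing, a solute mass balance relates the measured volumes and concentrations as below (stated here as the standard relation used in progressive freeze concentration studies; the paper's exact working equation may differ):

\[
K = \frac{C_S}{C_L},
\qquad
\frac{C_L}{C_0} = \left(\frac{V_L}{V_0}\right)^{K-1},
\]

where \(C_0, V_0\) are the initial concentration and volume of the solution and \(C_L, V_L\) those of the remaining liquid phase. A value of K close to 0 means the ice layer rejects solute effectively, i.e. high efficiency, consistent with the lower-K-is-better reading above.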

PoPCoRN: A Power-Aware Periodic Surveillance Scheme in Convex Region using Wireless Mobile Sensor Networks

In this paper, a periodic surveillance scheme is proposed for any convex region using mobile wireless sensor nodes. A sensor network typically consists of a fixed number of sensor nodes that report measurements of sensed data such as temperature, pressure and humidity in their immediate proximity (the area within their sensing range). Sensing an area of interest therefore requires an adequate number of fixed sensor nodes to cover the entire region, and this number depends on the sensing range of the sensors as well as on the deployment strategy employed. In our scheme, the sensors are assumed to be mobile within the region of surveillance, for example mounted on moving bodies such as robots or vehicles; consequently, it is the surveillance time period that determines the number of sensor nodes required to be deployed in the region of interest. The proposed scheme comprises three algorithms, namely Hexagonalization, Clustering, and Scheduling. The first algorithm partitions the coverage area into fixed-size hexagons that approximate the sensing range (cell) of an individual sensor node. The clustering algorithm groups the cells into clusters, each of which will be covered by a single sensor node. The scheduling algorithm determines a schedule for each sensor to serve its respective cluster: each sensor node traverses all the cells belonging to the cluster assigned to it by oscillating between the first and the last cell for the duration of its lifetime. Simulation results show that our scheme provides full coverage within a given period of time using few sensors with minimal movement, low power consumption, and relatively low infrastructure cost.
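
A minimal sketch of the geometry behind the Hexagonalization step: estimating how many hexagonal cells result when each regular hexagon is inscribed in a sensing circle of radius r. The rectangular region and the numerical values are illustrative assumptions; the paper's algorithm handles general convex regions and boundary cells explicitly.

```python
# Rough, area-based estimate of the number of hexagonal cells produced by
# hexagonalization of a rectangular region, where each regular hexagon is
# inscribed in the sensing circle of radius r (circumradius r).
# Region dimensions and sensing range are illustrative; partial cells at the
# boundary would add a few more cells in practice.
import math

def hexagon_cell_count(width, height, sensing_range):
    cell_area = 3 * math.sqrt(3) / 2 * sensing_range ** 2  # area of a regular hexagon
    return math.ceil((width * height) / cell_area)

print("approximate number of cells:",
      hexagon_cell_count(width=1000.0, height=600.0, sensing_range=25.0))
```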

Anti-Counterfeiting Solution Employing Mobile RFID Environment

EPC Class-1 Generation-2 UHF tags, one type of Radio Frequency Identification (RFID) tag, are expected to be adopted by most companies in the supply chain in the short term and in consumer packaging in the long term due to their inexpensive cost. Because of that very low cost, however, their resources are extremely scarce, and it is hard to embed any substantial security algorithms in them. This causes security vulnerabilities, in particular cloning of the tags for counterfeits. In this paper, we propose a product authentication solution for anti-counterfeiting at the application level in the supply chain and mobile RFID environment. It aims to detect the distribution of spurious products carrying fake RFID tags and to provide a product authentication service to general consumers with mobile RFID devices, such as a mobile phone or PDA equipped with a mobile RFID reader. We discuss the anti-counterfeiting mechanisms required by our proposed solution and address the requirements that these mechanisms should meet.

A Heuristic Algorithm Approach for Scheduling of Multi-criteria Unrelated Parallel Machines

In this paper we address a multi-objective scheduling problem for unrelated parallel machines. In unrelated parallel systems, the processing cost/time of a given job may vary across machines. The objective of scheduling is to simultaneously determine the job-machine assignment and the job sequencing on each machine such that the total cost of the schedule is minimized. The cost function consists of three components, namely machining cost, earliness/tardiness penalties, and makespan-related cost. Such a scheduling problem is combinatorial in nature; therefore, a simulated annealing approach is employed to provide good solutions within reasonable computational times. Computational results show that the proposed approach can efficiently solve such complicated problems.
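
A minimal sketch of the simulated annealing loop for the job-machine assignment part of the problem. The cost function below (processing cost plus a weighted makespan term) is illustrative; the paper's full objective also includes earliness/tardiness penalties and job sequencing on each machine.

```python
# Minimal sketch of simulated annealing for assigning jobs to unrelated parallel
# machines. The cost (processing cost + weighted makespan) is illustrative;
# the paper's objective also includes earliness/tardiness penalties and sequencing.
import math
import random

random.seed(0)
n_jobs, n_machines = 20, 4
# Unrelated machines: processing time/cost of each job differs per machine.
p = [[random.uniform(1, 10) for _ in range(n_machines)] for _ in range(n_jobs)]
makespan_weight = 2.0

def cost(assign):
    loads = [0.0] * n_machines
    total = 0.0
    for j, m in enumerate(assign):
        loads[m] += p[j][m]
        total += p[j][m]
    return total + makespan_weight * max(loads)

current = [random.randrange(n_machines) for _ in range(n_jobs)]
best, best_cost = current[:], cost(current)
T = 10.0
while T > 1e-3:
    neighbor = current[:]
    neighbor[random.randrange(n_jobs)] = random.randrange(n_machines)  # move one job
    delta = cost(neighbor) - cost(current)
    if delta < 0 or random.random() < math.exp(-delta / T):  # occasionally accept worse moves
        current = neighbor
        if cost(current) < best_cost:
            best, best_cost = current[:], cost(current)
    T *= 0.995  # geometric cooling schedule

print("best total cost found:", round(best_cost, 2))
```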

Optimum Design of an Absorption Heat Pump Integrated with a Kraft Industry using Genetic Algorithm

In this study the integration of an absorption heat pump (AHP) with the concentration section of an industrial pulp and paper process is investigated using pinch technology. The optimum design of the proposed water-lithium bromide AHP is then obtained by minimizing the total annual cost. A comprehensive optimization is carried out by relaxing all stream pressure drops as well as the heat exchanger areas involved in the AHP structure. It is shown that by applying a genetic algorithm optimizer, the total annual cost of the proposed AHP is decreased by 18% compared to the one resulting from simulation.

Fuzzy Cost Support Vector Regression

In this paper, a new version of support vector regression (SVR) is presented, namely Fuzzy Cost SVR (FCSVR). A distinctive property of FCSVR is operation over fuzzy data, whereby the fuzzy cost (fuzzy margin and fuzzy penalty) is maximized. This idea allows uncertainty to be handled in the penalty and margin terms jointly. The experimental results show robustness against noise as a property of the proposed method, as well as superiority relative to conventional SVR.

Spacecraft Neural Network Control System Design using FPGA

Designing and implementing intelligent systems has become a crucial factor for the innovation and development of better products for space technologies. A neural network is a parallel system capable of resolving paradigms that linear computing cannot. A field programmable gate array (FPGA) is a digital device that offers reprogrammability and robust flexibility. For neural network based instrument prototypes in real-time applications, conventional application-specific VLSI neural chip design suffers from limitations in time and cost. For low-precision artificial neural network designs, FPGAs offer higher speed and smaller size for real-time applications than VLSI and DSP chips. Consequently, many researchers have made great efforts toward realizing neural networks (NNs) using FPGA techniques. In this paper, ANN and FPGA techniques are briefly introduced, and VHDL (VHSIC Hardware Description Language) code is proposed to implement ANNs, together with simulation results using floating-point arithmetic. Synthesis results for the ANN controller are obtained using Precision RTL. The proposed VHDL implementation provides a flexible, fast method with a high degree of parallelism for implementing an ANN. Implementing the multi-layer NN using lookup tables (LUTs) reduces resource utilization and execution time.
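
To illustrate, in software, the lookup-table idea mentioned in the last sentence, the sketch below tabulates a fixed-point sigmoid as it might be mapped to FPGA block RAM; the table size and the Q4.12 fixed-point format are hypothetical choices for illustration, not the paper's VHDL design.

```python
# Software illustration of the lookup-table (LUT) idea for the activation function,
# as it might be mapped to FPGA block RAM. Table size and the Q4.12 fixed-point
# format are assumptions for illustration; the paper's VHDL design may differ.
import numpy as np

FRAC_BITS = 12                       # Q4.12 fixed point: 1 sign, 3 integer, 12 fraction bits
LUT_SIZE = 256                       # number of table entries
X_MIN, X_MAX = -8.0, 8.0             # input range covered by the table

# Precompute the sigmoid once; at synthesis time this would initialize a ROM/BRAM.
xs = np.linspace(X_MIN, X_MAX, LUT_SIZE)
SIGMOID_LUT = np.round(1.0 / (1.0 + np.exp(-xs)) * (1 << FRAC_BITS)).astype(np.int32)

def sigmoid_lut(x_fixed):
    """Approximate sigmoid of a Q4.12 fixed-point input using the LUT."""
    x = x_fixed / (1 << FRAC_BITS)                        # back to a real value
    idx = int((x - X_MIN) / (X_MAX - X_MIN) * (LUT_SIZE - 1))
    idx = min(max(idx, 0), LUT_SIZE - 1)                  # clamp out-of-range inputs
    return SIGMOID_LUT[idx]                               # Q4.12 result

x_fixed = int(round(1.5 * (1 << FRAC_BITS)))              # input value 1.5 in Q4.12
print("LUT sigmoid(1.5) ~=", sigmoid_lut(x_fixed) / (1 << FRAC_BITS))
```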