Life Cycle Assessment of Seawater Desalination in Western Australia

Perth will run out of available sustainable natural water resources by 2015 if nothing is done to slow usage rates, according to a Western Australian study [1]. Alternative water technology options need to be considered to guarantee the long-term supply of water for agricultural, commercial, domestic and industrial purposes. Seawater is an alternative source of water for human consumption, because it can be desalinated and supplied in large quantities at very high quality. While seawater desalination is a promising option, the technology requires a large amount of energy, which is typically generated from fossil fuels. The combustion of fossil fuels emits greenhouse gases (GHG) and is implicated in climate change. In addition to the emissions from electricity generation for desalination, greenhouse gases are emitted in the production of chemicals and membranes for water treatment. Since Australia is a signatory to the Kyoto Protocol, it is important to quantify greenhouse gas emissions from desalinated water production. A life cycle assessment (LCA) has been carried out to determine the greenhouse gas emissions from the production of 1 gigalitre (GL) of desalinated water. In this LCA, a new desalination plant to be installed in Bunbury, Western Australia, known as the Southern Seawater Desalination Plant (SSDP), was taken as a case study. The system boundary of the LCA consists mainly of three stages: seawater extraction, treatment and delivery. The analysis found that the equivalent of 3,890 tonnes of CO2 could be emitted in the production of 1 GL of desalinated water. The analysis also identified that the reverse osmosis process would cause the most significant greenhouse gas emissions, as a result of the electricity used, if this electricity is generated from fossil fuels.
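
To make the accounting concrete, the sketch below shows the structure of such a stage-wise GHG calculation: electricity per stage multiplied by a grid emission factor, plus embodied emissions of chemicals and membranes. All numbers are illustrative placeholders, not values from the SSDP study.

```python
# Illustrative GHG accounting for 1 GL of desalinated water.
# All figures are placeholders, not values from the SSDP study.
GRID_EF = 0.8  # t CO2-e per MWh; hypothetical fossil-dominated grid factor

stages_mwh = {                 # electricity use per GL, hypothetical split
    "seawater extraction": 300,
    "reverse osmosis":     3800,
    "delivery":            500,
}
embodied = {"chemicals": 120, "membranes": 40}  # t CO2-e per GL, hypothetical

electricity_emissions = {s: e * GRID_EF for s, e in stages_mwh.items()}
total = sum(electricity_emissions.values()) + sum(embodied.values())
for stage, t in electricity_emissions.items():
    print(f"{stage}: {t:.0f} t CO2-e")
print(f"total: {total:.0f} t CO2-e per GL")
```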

Effects of Market Share and Diversification on Nonlife Insurers' Performance

The aim of this paper is to investigate the influence of market share and diversification on nonlife insurers' performance. The underlying relationships have been investigated in different industries and different disciplines (e.g., economics and management), yet no consistency exists in either the magnitude or the statistical significance of the relationship between market share (or diversification) on one side and companies' performance on the other. Moreover, the direction of the relationship is also somewhat questionable: while some authors find it to be positive, others report a negative association. In order to test the influence of market share and diversification on companies' performance in the Croatian nonlife insurance industry for the period from 1999 to 2009, we designed an empirical model that includes the following independent variables: firms' profitability from previous years, market share, diversification and control variables (i.e. ownership, industrial concentration, GDP per capita, and inflation). Using the two-step generalized method of moments (GMM) estimator, we found evidence of a positive and statistically significant influence of both market share and diversification on insurers' profitability.
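
As a sketch of the estimation machinery, the following is a generic linear two-step GMM estimator in numpy. The paper's model is a dynamic panel (lagged profitability among the regressors), so in practice the instrument matrix Z would hold lagged values; the variable names and instrument choice here are assumptions for illustration.

```python
import numpy as np

def two_step_gmm(y, X, Z):
    """Linear two-step GMM for y = X b + u with instruments Z
    (Z must have at least as many columns as X)."""
    W = np.linalg.inv(Z.T @ Z)                 # step 1: 2SLS-style weighting
    A = X.T @ Z
    b1 = np.linalg.solve(A @ W @ A.T, A @ W @ Z.T @ y)
    u = y - X @ b1                             # first-step residuals
    S = (Z * u[:, None] ** 2).T @ Z            # robust moment covariance
    W2 = np.linalg.inv(S)                      # step 2: efficient weighting
    b2 = np.linalg.solve(A @ W2 @ A.T, A @ W2 @ Z.T @ y)
    return b2
```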

Remarks Regarding Queuing Model and Packet Loss Probability for Traffic with Self-Similar Characteristics

Network management techniques have long been of interest to the networking research community. Queue size plays a critical role in network performance: an adequately sized queue maintains Quality of Service (QoS) requirements within limited network capacity for as many users as possible. Appropriate estimation of the queuing model parameters is crucial both for initial queue sizing and during the process of resource allocation, and an accurate resource-allocation model for the management system increases network utilization. The present paper demonstrates the results of empirical observations of memory allocation for packet-based services.
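
One widely cited way to size a queue for self-similar traffic is Norros's fractional Brownian motion approximation, P(Q > b) ≈ exp(−(C−m)^{2H} b^{2−2H} / (2 κ(H)² a m)) with κ(H) = H^H (1−H)^{1−H}. The sketch below inverts it to find a buffer size for a target loss probability; whether this matches the paper's queuing model is an assumption, and the parameter values are hypothetical.

```python
import math

def norros_buffer(m, a, H, C, eps):
    """Buffer size b with overflow probability ~eps for fBm traffic
    (Norros approximation): P(Q > b) ~ exp(-(C-m)^{2H} b^{2-2H} / (2 k^2 a m)).
    m: mean rate, a: variance coefficient, H: Hurst parameter, C: capacity."""
    k = H ** H * (1 - H) ** (1 - H)
    num = 2 * k ** 2 * a * m * math.log(1 / eps)
    return (num / (C - m) ** (2 * H)) ** (1 / (2 - 2 * H))

# hypothetical numbers: 60 Mb/s mean load on a 100 Mb/s link, H = 0.8
print(norros_buffer(m=60.0, a=1.0, H=0.8, C=100.0, eps=1e-6))
```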

Fire Spread Simulation Tool for Cruise Vessels

In 2002 an amendment to SOLAS opened the way for lightweight material constructions in vessels, provided the same fire safety as in steel constructions can be demonstrated. FISPAT (FIreSPread Analysis Tool) is a computer application that simulates fire spread and fault injection in cruise vessels and identifies fire-sensitive areas. It was developed to analyze cruise vessel designs and provides a method to evaluate the network layout and safety of cruise vessels. It allows fast, reliable and deterministic exhaustive simulations and presents the results in a graphical vessel model. By performing the analysis iteratively while altering the cruise vessel design, it can be used along with fire chamber experiments to show that a lightweight design can be as safe as a steel construction and that the SOLAS regulations are fulfilled.
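
A minimal illustration of the kind of deterministic, exhaustive analysis described (not FISPAT's actual code): treat compartments as graph nodes, let each edge carry a fire-spread time derived from bulkhead properties, and compute earliest fire-arrival times from an ignition point. Repeating this over every ignition compartment gives the exhaustive sweep, and compartments reached quickly from many sources mark fire-sensitive areas.

```python
import heapq

def spread_times(adjacency, ignition):
    """Earliest fire-arrival time per compartment (Dijkstra over spread times).
    adjacency: {compartment: [(neighbour, minutes_to_spread), ...]}"""
    reached = {}
    heap = [(0.0, ignition)]
    while heap:
        t, c = heapq.heappop(heap)
        if c in reached:
            continue                    # already reached earlier
        reached[c] = t
        for n, dt in adjacency.get(c, []):
            if n not in reached:
                heapq.heappush(heap, (t + dt, n))
    return reached

def exhaustive_sweep(adjacency):
    """Run the simulation once per possible ignition compartment."""
    return {c: spread_times(adjacency, c) for c in adjacency}
```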

Geometry Design Supported by Minimizing and Visualizing Collision in Dynamic Packing

This paper presents a method to support dynamic packing in cases where no collision-free path can be found. The method, which is primarily based on path planning and shrinking of geometries, suggests a minimal geometry design change that results in a collision-free assembly path. A supplementary approach to optimize the geometry design change with respect to redesign cost is described. Supporting this dynamic packing method, a new method to shrink geometry based on vertex translation, interwoven with retriangulation, is suggested. The shrinking method requires neither tetrahedralization nor calculation of the medial axis, and it preserves the topology of the geometry, i.e. holes are neither lost nor introduced. The proposed methods are successfully applied to industrial geometries.
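
As a rough sketch of the vertex-translation idea (omitting the retriangulation the paper interleaves with it, and any validity checks), one shrinking step could move each vertex inward along its area-weighted vertex normal:

```python
import numpy as np

# Minimal sketch of one shrinking step by vertex translation; the paper's
# method additionally retriangulates between steps, which is omitted here.
# V: (n, 3) vertex positions, F: (m, 3) triangle vertex indices.
def shrink_step(V, F, offset):
    normals = np.zeros_like(V)
    tri = V[F]
    face_n = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    for i in range(3):                  # accumulate area-weighted normals
        np.add.at(normals, F[:, i], face_n)
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    normals /= np.where(lengths > 0, lengths, 1.0)
    # assumes outward-oriented normals, so subtracting moves vertices inward
    return V - offset * normals
```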

Performance Analysis of Chrominance Red and Chrominance Blue in JPEG

While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the amount of bandwidth commonly available to transmit them over the Internet. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root-mean-square error (RMSE) and peak signal-to-noise ratio (PSNR). The method of image compression analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least “important” information. In standard JPEG, both chroma components are downsampled simultaneously; in this paper we compare the results when compression is performed by downsampling a single chroma component. We demonstrate that a higher compression ratio is achieved when chrominance blue is downsampled than when chrominance red is downsampled, but that the peak signal-to-noise ratio is higher when chrominance red is downsampled. In particular, we use hats.jpg to demonstrate JPEG compression using a low-pass filter, and show that the image is compressed with barely any visual difference under both methods.
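
The sketch below shows the two evaluation metrics and the single-chroma downsampling being compared; the helper names are mine, and 2x2 subsampling with pixel replication is one simple choice of downsampling, not necessarily the paper's filter.

```python
import numpy as np

def rmse(a, b):
    d = a.astype(np.float64) - b.astype(np.float64)
    return float(np.sqrt(np.mean(d ** 2)))

def psnr(a, b, peak=255.0):
    return float(20 * np.log10(peak / rmse(a, b)))

def downsample_one_chroma(ycbcr, channel):
    """Halve the resolution of a single chroma plane (1 = Cb, 2 = Cr) and
    restore it by pixel replication; luma and the other plane are untouched."""
    out = ycbcr.copy()
    plane = ycbcr[:, :, channel]
    small = plane[::2, ::2]                       # 2x2 subsampling
    up = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)
    out[:, :, channel] = up[: plane.shape[0], : plane.shape[1]]
    return out
```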

Assessing the Effects of Explosion Waves on Office and Residential Buildings

Explosions may cause intensive damage to buildings and sometimes lead to total and progressive destruction. Pressures induced by explosions are among the most destructive loads a structure may experience. Since designing structures for large explosions may be expensive and impractical, engineers look for methods of preventing the destruction that results from explosions. A favorable structural system is one that does not collapse entirely due to a local explosion, since such structures sustain less loss than those that carry the load until they suddenly collapse. Designing and building vital installations to resist a direct bomb or rocket hit is in many cases neither practical, economical, nor expedient, because the cost of construction to such specifications is several times the total cost of the related equipment.

Improvement of the Reliability of Industrial Electric Networks

Continuity of electric supply to electric installations is becoming one of the main requirements of the electric supply network (generation, transmission, and distribution of electric energy). Meeting this requirement depends on the one hand on the structure of the electric network, and on the other hand on the availability of the reserve source provided to maintain the supply in case of failure of the principal one. The availability of supply depends not only on the reliability parameters of both sources (principal and reserve) but also on the reliability of the circuit breaker that switches in the reserve source when the principal one fails. In addition, while the principal source is in operation its condition can be monitored reliably; the reserve source, however, is on standby, so preventive maintenance at defined intervals (periodicity) and of well-defined duration is planned to ensure that this source is always available in case of principal source failure. The chosen periodicity of preventive maintenance of the reserve source directly influences the reliability of the electric feeder system. In this work, on the basis of semi-Markov processes, the influence of the time needed to switch in the reserve source on the reliability of an industrial electric network is studied, and the optimal switching time in case of failure of the principal source is given; the influence of the periodicity of preventive maintenance of the reserve source is also studied, and the optimal periodicity is given.
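
The full semi-Markov model is beyond an abstract, but a classical simplified standby model already shows why an optimal maintenance periodicity exists: a dormant reserve with failure rate lam accumulates unrevealed failures between tests (contribution roughly lam*T/2), while testing every T hours for tau hours makes it unavailable during the test (contribution tau/T). All values below are hypothetical, and this is not the paper's model.

```python
import math

def reserve_unavailability(lam, T, tau):
    """Classic approximation for a periodically tested standby component:
    dormant-failure term lam*T/2 plus test-outage term tau/T."""
    return lam * T / 2 + tau / T

def optimal_period(lam, tau):
    """Interval T minimizing the expression above."""
    return math.sqrt(2 * tau / lam)

lam, tau = 1e-4, 2.0          # hypothetical: failures/hour, test duration in hours
T_opt = optimal_period(lam, tau)
print(T_opt, reserve_unavailability(lam, T_opt, tau))
```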

Empirical Study on the Diffusion of Smartphones and Consumer Behaviour

In this research, the diffusion of innovation regarding smartphone usage is analysed through consumer behaviour theory. This research aims to determine whether a pattern surrounding the diffusion of innovation exists. As a methodology, an empirical study of the switch from a conventional cell phone to a smartphone was performed. Specifically, a questionnaire survey was completed by general consumers, and the situational and behavioural characteristics of switching from a cell phone to a smartphone were analysed. In conclusion, we found that the speed of the diffusion of innovation, the consumer behaviour characteristics, and the utilities of the product vary according to the stage of the product life cycle.

Model of Continuous Cheese Whey Fermentation by Candida pseudotropicalis

The utilization of cheese whey as a fermentation substrate to produce bio-ethanol is an effort to supply bio-ethanol demand as a renewable energy source. As with other process systems, modeling is required for fermentation process design, optimization and plant operation. This research aims to study the fermentation of cheese whey by applying mathematics and fundamental concepts of chemical engineering, and to investigate the characteristics of the cheese whey fermentation process. Steady-state simulation results for inlet substrate concentrations of 50, 100 and 150 g/l, over a range of hydraulic retention times, showed that the maximum ethanol productivities were 0.1091, 0.3163 and 0.5639 g/(l·h), respectively. Those values were achieved at a hydraulic retention time of 20 hours, the minimum value used in this modeling, which shows that operating the reactor at a low hydraulic retention time is favorable. A model of bio-ethanol production from cheese whey will enhance the understanding of what really happens in the fermentation process.
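
A minimal steady-state chemostat sketch with Monod kinetics reproduces the qualitative behavior reported (productivity rises as hydraulic retention time falls, up to washout); the kinetic and yield parameters below are illustrative, not the values fitted in the paper.

```python
# Steady-state chemostat with Monod kinetics; illustrative parameters only.
mu_max, Ks = 0.3, 2.0        # maximum growth rate (1/h), saturation constant (g/l)
Y_ps = 0.45                  # ethanol yield on substrate (g/g), hypothetical

def productivity(S0, hrt):
    """Volumetric ethanol productivity in g/(l*h) at steady state."""
    D = 1.0 / hrt                        # dilution rate = 1/HRT
    if D >= mu_max:
        return None                      # washout: no steady state exists
    S = Ks * D / (mu_max - D)            # from mu(S) = D at steady state
    P = Y_ps * max(S0 - S, 0.0)          # ethanol concentration (g/l)
    return D * P

for S0 in (50, 100, 150):
    print(S0, [round(productivity(S0, h), 4) for h in (20, 40, 80)])
```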

TSM: A Design Pattern to Make Ad-hoc BPMs Easy and Inexpensive in Workflow-aware MISs

Despite many years of development, mainstream workflow solutions from the IT industry have not made ad-hoc workflow support easy or inexpensive in MISs. Moreover, most academic approaches tend to make the resulting BPM (Business Process Management) more complex and clumsy, since they require explicit workflow modeling. To cope well with various ad-hoc or casual workflow requirements while keeping things simple and inexpensive, the author puts forth the TSM design pattern, which provides flexible workflow control while minimizing the need for predefinition and workflow modeling, and which introduces a generic approach for building BPM in workflow-aware MISs (Management Information Systems) with low development and running expenses.

Dynamic Analyses for Passenger Volume of Domestic Airline and High Speed Rail

The discrete choice model is the most widely used methodology for studying travelers' mode choice and demand. However, calibrating a discrete choice model requires extensive questionnaire surveys. In this study, an aggregate model is proposed instead. Historical data on passenger volumes for high-speed rail and domestic civil aviation are employed to calibrate and validate the model, and different model forms are compared in order to propose the best one. The results show that systems of equations forecast better than a single equation does, and that models including an external variable, oil price, outperform models based on a closed-system assumption.
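
As an illustration of the kind of system estimated, the sketch below fits two linked equations (air and high-speed rail volumes, each depending on both modes' lagged volumes and on oil price) by least squares; this exact specification is an assumption, since the abstract does not spell out the equations.

```python
import numpy as np

def fit_ols(y, X):
    """Least-squares coefficients for y = X b."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def fit_system(air, hsr, oil):
    """Two-equation aggregate model (hypothetical form): each mode's volume
    depends on a constant, both modes' lagged volumes, and current oil price."""
    X = np.column_stack([np.ones(len(air) - 1), air[:-1], hsr[:-1], oil[1:]])
    return fit_ols(air[1:], X), fit_ols(hsr[1:], X)
```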

Measurement of Lead Pollution in the Air of Babylon Governorate/Iraq during 2010

This research aims to study the lead pollution in the air of Babylon governorate, which results mainly from vehicle exhausts in addition to industrial and human activities. The number of vehicles in Babylon governorate increased significantly after 2003, with a corresponding increase in lead emissions into the air. Lead emissions were measured at seven stations distributed randomly across Babylon governorate. These stations were located in the Industrial (Al-Sena'ay) Quarter, 60 Street (near the Babylon sewer directorate), 40 Street (near the first intersection), Al-Hashmia city, Al-Mahaweel city, and Al-Musayab city, in addition to a station in Sayd Idris village, belonging to the Abugharaq district (an agricultural station used for comparison). The measured concentrations at these stations were compared with the standard limit of the Environmental Protection Agency (EPA) of 2 μg/m3. The results of this study showed that the average lead concentration in Babylon governorate during 2010 was 3.13 μg/m3, which exceeds the standard limit (2 μg/m3). The maximum lead concentration, 6.41 μg/m3, was recorded in the Industrial (Al-Sena'ay) Quarter in April, while the minimum concentration, 0.36 μg/m3, was recorded at the agricultural station (Abugharaq) in December.

Dempster-Shafer Information Filtering in Multi-Modality Wireless Sensor Networks

A framework is presented to estimate the state of a dynamically varying environment in which data are generated by heterogeneous sources, each possessing only partial knowledge of the environment. The framework is derived entirely within the Dempster-Shafer and evidence filtering frameworks, and the belief about the current state is expressed through belief and plausibility functions. In addition to the Single-Input Single-Output Evidence Filter, a Multiple-Input Single-Output evidence filtering approach is introduced. A variety of applications, such as situational estimation for an emergency environment, can be developed successfully within the framework. A fire propagation scenario is used to justify the proposed framework, and simulation results are presented.
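
The fusion step such a framework builds on is Dempster's rule of combination; a minimal sketch over mass functions with frozenset focal elements, together with the belief and plausibility functions mentioned above, follows (this is the generic rule, not the paper's full evidence filter):

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions {frozenset: mass}."""
    raw, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb          # mass assigned to the empty set
    return {s: p / (1 - conflict) for s, p in raw.items()}

def belief(m, hypothesis):
    """Sum of masses of focal elements contained in the hypothesis."""
    return sum(p for s, p in m.items() if s <= hypothesis)

def plausibility(m, hypothesis):
    """Sum of masses of focal elements intersecting the hypothesis."""
    return sum(p for s, p in m.items() if s & hypothesis)
```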

Low Energy Method for Data Delivery in Ubiquitous Networks

Recent advances in wireless sensor networks have led to many routing methods designed for energy efficiency. Although many routing methods have been proposed for ubiquitous sensor networks (USNs), a single routing method cannot remain energy-efficient when the environment of the ubiquitous sensor network varies. We focus on controlling network access to the various hosts and the services they offer, rather than securing them one by one, using a network security model. When ubiquitous sensor networks are deployed in hostile environments, an adversary may compromise some sensor nodes and use them to inject false sensing reports. False reports can lead not only to false alarms but also to the depletion of the limited energy resources of battery-powered networks; the interleaved hop-by-hop authentication scheme detects such false reports through interleaved authentication. This paper presents LMDD (low energy method for data delivery), an algorithm that provides energy efficiency by dynamically changing the protocols installed at the sensor nodes. The algorithm changes protocols based on the output of a fuzzy logic system that expresses the fitness of each protocol for the current environment.
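
A toy version of the fuzzy protocol-selection idea: grade each candidate protocol's fitness for the current environment from fuzzy memberships over the network state, then install the best-scoring protocol. The membership functions, state variables and protocol names below are hypothetical, not those of LMDD.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fitness(density, mobility, residual_energy):
    # each protocol's fitness = min of the memberships it favours (fuzzy AND)
    return {
        "flooding":    min(tri(density, 0.0, 0.2, 0.5), residual_energy),
        "clustered":   min(tri(density, 0.3, 0.7, 1.0), 1 - mobility),
        "geo-routing": min(mobility, residual_energy),
    }

def pick_protocol(**state):
    scores = fitness(**state)
    return max(scores, key=scores.get)

print(pick_protocol(density=0.8, mobility=0.2, residual_energy=0.9))
```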

A Robust LS-SVM Regression

In Least Squares SVM (LS-SVM), the nonlinear solution is obtained by first mapping the input vector into a high-dimensional kernel space in a nonlinear fashion, where the solution is then calculated from a set of linear equations. In comparison to the original SVM, which involves a quadratic programming task, LS-SVM simplifies the required computation, but unfortunately the sparseness of the standard SVM is lost. Another problem is that LS-SVM is optimal only if the training samples are corrupted by Gaussian noise. In this paper a geometric view of the kernel space is introduced, which enables us to develop a new formulation that achieves a sparse and robust estimate.
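
For reference, standard LS-SVM regression training reduces to a single linear system, [[0, 1ᵀ], [1, K + I/γ]][b; α] = [0; y]; the sketch below solves it with an RBF kernel (a common choice, assumed here). Every αᵢ is generally nonzero, which is precisely the lost sparseness the paper addresses.

```python
import numpy as np

def rbf(X1, X2, sigma):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    """Solve the LS-SVM dual linear system; returns a prediction function."""
    n = len(y)
    K = rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma    # regularized kernel block
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    return lambda Xt: rbf(Xt, X, sigma) @ alpha + b
```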

Modeling the Fischer-Tropsch Reaction in a Slurry Bubble Column Reactor

Fischer-Tropsch synthesis is one of the most important catalytic reactions, converting synthesis gas into light and heavy hydrocarbons. One of the main design issues is selecting the type of reactor. The slurry bubble column reactor is a suitable choice for Fischer-Tropsch synthesis because of its good heat- and mass-transfer characteristics, high catalyst durability, and low maintenance and repair costs. The most common catalysts for Fischer-Tropsch synthesis are iron-based and cobalt-based; the advantage of one over the other depends on which type of hydrocarbons is to be produced. In this study, Fischer-Tropsch synthesis is modeled with iron and cobalt catalysts in a slurry bubble column reactor, considering mass and momentum balances and the effect of hydrodynamic relations on reactor behavior. Profiles of reactant conversion and reactant concentration in the gas and liquid phases were determined as functions of residence time in the reactor. The effects of temperature, pressure, liquid velocity, reactor diameter, catalyst diameter, gas-liquid and liquid-solid mass transfer coefficients, and kinetic coefficients on reactant conversion were studied. With a 5% increase in liquid velocity (iron catalyst), H2 conversion increases by about 6% and CO conversion by about 4%; with an 8% increase in liquid velocity (cobalt catalyst), H2 conversion increases by about 26% and CO conversion by about 4%. With a 20% increase in the gas-liquid mass transfer coefficient, H2 conversion increases by about 12% and CO conversion by about 10% with the iron catalyst, while with the cobalt catalyst H2 conversion increases by about 10% and CO conversion by about 6%. The results show that the process is sensitive to the gas-liquid mass transfer coefficient and that the optimal operating condition occurs at the maximum possible liquid velocity. This velocity must be greater than the minimum fluidization velocity and less than the terminal velocity, so that catalyst particles are not carried out of the fluidized bed.
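
A greatly simplified sketch of the model's structure (one lumped reactant, gas-to-liquid mass transfer with coefficient kLa feeding a pseudo-first-order reaction, integrated over residence time) illustrates why conversion is sensitive to the gas-liquid mass transfer coefficient; all parameter values are illustrative, not the paper's kinetics or hydrodynamics.

```python
from scipy.integrate import solve_ivp

# Illustrative, dimensionless two-phase balance: one reactant transferred
# from gas to liquid (rate kLa) and consumed by a pseudo-first-order
# reaction (rate k); He lumps solubility and holdup effects.
kLa, He, k = 0.5, 10.0, 0.2

def balances(t, y):
    Cg, Cl = y
    transfer = kLa * (Cg / He - Cl)      # gas-to-liquid mass transfer driving force
    return [-transfer * He, transfer - k * Cl]

sol = solve_ivp(balances, (0.0, 60.0), [1.0, 0.0])
Cg_out = sol.y[0, -1]
print("conversion:", 1.0 - Cg_out)       # rises as kLa is increased
```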

Information Transmission between Large and Small Stocks in the Korean Stock Market

Little attention has been paid to information transmission between the portfolios of large stocks and small stocks in the Korean stock market. This study investigates the return and volatility transmission mechanisms between large and small stocks on the Korea Exchange (KRX). It also explores whether bad news in the large stock market leads to a volatility response in the small stock market that is larger than the response to good news from the large stock market. By employing the Granger causality test, we found unidirectional return transmission from large stocks to medium and small stocks. This evidence indicates that past information about the large stocks has a better ability to predict the returns of the medium and small stocks in the Korean stock market. Moreover, by using the asymmetric GARCH-BEKK model, we observed a unidirectional relationship of asymmetric volatility transmission from large stocks to medium and small stocks. This finding suggests that volatility in the medium and small stocks following a negative shock in the large stocks is larger than that following a positive shock in the large stocks.
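
A sketch of the return-transmission test using statsmodels' built-in Granger causality routine on synthetic placeholder series (real KRX portfolio returns would replace them); the routine tests whether the second column Granger-causes the first.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
large = rng.normal(size=500)                             # placeholder returns
small = 0.3 * np.roll(large, 1) + rng.normal(size=500)   # built-in lagged dependence

# does "large" (2nd column) Granger-cause "small" (1st column)?
data = np.column_stack([small, large])
grangercausalitytests(data, maxlag=5)
```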

DC Link Floating for Grid Connected PV Converters

Nowadays numerous grid-connected converters operate in the power system. These grid-connected converters generally belong to renewable energy sources, industrial four-quadrant drives, and other converters with a DC link, and they are connected to the grid through a three-phase bridge. The standards prescribe the maximum harmonic emission, which can easily be kept within limits using a high switching frequency. The resulting increased switching losses can be halved by using the well-known flat-top modulation. The suggested control method is an extension of flat-top modulation with which the losses can be halved again compared to flat-top modulation. Compared with traditional control, these requirements can be satisfied simultaneously much better with the DLF (DC Link Floating) method.
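
For context, a common way to realize flat-top (discontinuous) modulation is zero-sequence injection that clamps, at each instant, the phase with the largest magnitude to a DC rail so that it stops switching; the sketch below generates such references. This is the generic technique, not the paper's extended DLF method.

```python
import numpy as np

t = np.linspace(0, 0.02, 1000)           # one 50 Hz fundamental period
phases = np.array([[0.0], [-2.0], [2.0]]) * np.pi / 3
m = 0.9 * np.sin(2 * np.pi * 50 * t + phases)   # sinusoidal references, 3 x N

idx = np.argmax(np.abs(m), axis=0)        # phase with largest magnitude
peak = m[idx, np.arange(m.shape[1])]
# zero sequence that pushes that phase exactly to +1 or -1 (the DC rails)
offset = np.where(peak > 0, 1.0 - peak, -1.0 - peak)
m_flat = m + offset                       # flat-top references in [-1, 1]
```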

Analytical Mathematical Expression for the Channel Capacity of a Power and Rate Simultaneous Adaptive Cellular DS/FFH-CDMA System in a Rayleigh Fading Channel

In this paper, an accurate theoretical analysis is presented for the achievable average channel capacity (in the Shannon sense) per user of a hybrid cellular direct-sequence/fast frequency hopping code-division multiple-access (DS/FFH-CDMA) system operating in a Rayleigh fading environment. The analysis covers the downlink operation and leads to the derivation of an exact mathematical expression relating the normalized average channel capacity available to each system user, under simultaneous optimal power and rate adaptation, to the system parameters, such as the number of hops per bit, the processing gain applied, the number of users per cell and the received signal-to-noise power ratio over the signal bandwidth. Finally, numerical results are presented to illustrate the proposed mathematical analysis.
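
For orientation, the classical single-user result for optimal simultaneous power and rate adaptation (Goldsmith and Varaiya), which analyses of this kind typically extend with the CDMA system parameters listed above, reads:

```latex
% Single-user baseline: \bar{\gamma} is the average received SNR and
% \gamma_0 the optimal cutoff fixed by the average power constraint.
\frac{C}{B} = \int_{\gamma_0}^{\infty} \log_2\!\left(\frac{\gamma}{\gamma_0}\right) p(\gamma)\, d\gamma ,
\qquad
\int_{\gamma_0}^{\infty} \left(\frac{1}{\gamma_0} - \frac{1}{\gamma}\right) p(\gamma)\, d\gamma = 1 ,
\qquad
p(\gamma) = \frac{1}{\bar{\gamma}}\, e^{-\gamma/\bar{\gamma}} \quad \text{(Rayleigh fading)} .
```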