Convergence Analysis of the Generalized Alternating Two-Stage Method

In this paper, we present the generalized alternating two-stage method, in which the inner iterations are carried out by a generalized alternating method. We establish convergence results for the method when solving nonsingular linear systems whose coefficient matrix is a monotone matrix or an H-matrix.
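
As a concrete illustration of the two-stage idea (a minimal sketch, not the paper's generalized alternating scheme), the following Python code uses a Gauss-Seidel-type outer splitting A = M - N whose inner system M z = N x + b is itself solved inexactly by a few Jacobi sweeps; the test matrix is a monotone M-matrix of the kind covered by such convergence results:

```python
import numpy as np

def two_stage(A, b, s=3, outer=2000, tol=1e-10):
    # Outer splitting A = M - N with M = lower-triangular part of A;
    # inner splitting M = F - G with F = diag(A), so each of the s
    # inner sweeps is a Jacobi step on the inner system M z = N x + b.
    M = np.tril(A)
    N = M - A
    G = np.diag(np.diag(A)) - M
    Finv = np.diag(1.0 / np.diag(A))
    x = np.zeros_like(b)
    for _ in range(outer):
        rhs = N @ x + b
        z = x.copy()
        for _ in range(s):          # inexact inner solve
            z = Finv @ (G @ z + rhs)
        if np.linalg.norm(z - x) < tol:
            return z
        x = z
    return x

# Monotone test problem: 1-D discrete Laplacian (an M-matrix)
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
print(np.linalg.norm(A @ two_stage(A, b) - b))  # small residual
```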

Streamflow Modeling for a Small Watershed Using Limited Hydrological Data

This research was conducted in the Pua Watershed, located in the Upper Nan River Basin in Nan Province, Thailand. The Nan River Basin originates in Nan Province and comprises many tributary streams whose combined flow feeds the Sirikit Dam, a huge reservoir with a storage capacity of 9,510 million cubic meters. The common problems of most watersheds were found here, i.e. water shortages for consumption and agricultural use, deteriorating water quality, floods and landslides including debris flows, and unstable riverbanks. The Pua Watershed is one of several small river basins that drain into the Nan River Basin. Its area of 404 km2 represents 61.5% of the Pua District, 18.2% of the Upper Nan Basin, and 1.2% of the whole Nan River Basin. The Pua River is the main stream, producing year-round streamflow that supplies the Pua District and provides inflow to the Upper Nan Basin. Its length is approximately 56.3 km, with a measured average channel slope of 1.9%. A diversion weir, the Pua weir, marks the boundary between the mountainous and plain areas: the upstream watershed above the weir has a very steep riverbed slope of 2.9% and a drainage area of 149 km2, while the 20.3 km river reach downstream of the weir, considered a gauged basin, has a mild riverbed slope of 0.2%. The major branch streams of the Pua River, however, are ungauged catchments, namely the Nam Kwang and Nam Koon, with drainage areas of 86 and 35 km2, respectively. These upstream watersheds produce runoff through the three streams at the Pua, Jao, and Kang weirs, with an average annual runoff of 578 million cubic meters, analyzed using both gauged records at the Pua weir and data simulated with the Hydrologic Modeling System (HEC-HMS) for the remaining ungauged basins. Since the Kwang and Koon catchments lack hydrological data, including streamflow and rainfall, HEC-HMS with Snyder's synthetic unit hydrograph and transposition methods was applied to those areas, using hydrological parameters calibrated for the basin upstream of the Pua weir, where streamflow and rainfall were recorded daily and continuously during 2008-2011. The results showed that the simulated daily streamflow, summed to annual runoff, fitted the observed annual runoff at the Pua weir in 2008, 2010, and 2011, with simple linear regression giving satisfactory coefficients of determination (R2 of 0.64, 0.62, and 0.59, respectively). The sensitivity of the simulation results stems from the difficulty of using the calibrated parameters, i.e. lag time, peaking coefficient, initial losses, and uniform loss rates, and from some missing daily observations. These calibrated parameters were then applied to simulate the other two ungauged catchments and the downstream catchments.
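
A minimal sketch of the Snyder relations used inside HEC-HMS may help (SI form, after standard hydrology texts); the coefficients Ct and Cp and the catchment numbers below are hypothetical stand-ins, not the paper's calibrated values:

```python
def snyder_uh_peak(L_km, Lc_km, area_km2, Ct=1.5, Cp=0.6):
    """Snyder synthetic unit hydrograph basics: basin lag
    tp = 0.75*Ct*(L*Lc)**0.3 [h] and peak flow per cm of effective
    rainfall Qp = 2.75*Cp*A/tp [m3/s]. Ct and Cp are the regional
    coefficients that would be calibrated at the gauged basin above
    the Pua weir and transposed to the ungauged catchments."""
    tp = 0.75 * Ct * (L_km * Lc_km) ** 0.3   # basin lag, hours
    Qp = 2.75 * Cp * area_km2 / tp           # m3/s per cm of rainfall
    return tp, Qp

# hypothetical stream lengths for the 86 km2 Nam Kwang catchment
tp, Qp = snyder_uh_peak(L_km=20.0, Lc_km=9.0, area_km2=86.0)
print(f"lag = {tp:.2f} h, unit-hydrograph peak = {Qp:.1f} m3/s")
```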

A Serializability Condition for Multi-step Transactions Accessing Ordered Data

In mobile environments, unspecified numbers of transactions arrive in continuous streams. To prove the correctness of their concurrent execution, a method of modelling an infinite number of transactions is needed. Standard database techniques model fixed, finite schedules of transactions. Lately, techniques based on temporal logic have been proposed as suitable for modelling infinite schedules. The drawback of these techniques is that proving the basic serializability correctness condition is impractical, as encoding (the absence of) conflict cyclicity within large sets of transactions results in prohibitively large temporal logic formulae. In this paper, we show that, under certain common assumptions on the graph structure of the data items accessed by the transactions, conflict cyclicity need only be checked within all possible pairs of transactions. This results in formulae of considerably reduced size in any temporal-logic-based approach to proving serializability, and scales to arbitrary numbers of transactions.
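
A minimal sketch of the resulting pairwise check, assuming the usual read/write conflict model (the temporal-logic encoding itself is not reproduced here):

```python
from itertools import combinations

def conflict_edges(schedule):
    """Directed conflict edges from an interleaved schedule given as
    (txn, op, item) triples with op in {'r', 'w'}: operations of
    different transactions conflict when they touch the same item and
    at least one writes; the edge runs from the earlier transaction."""
    edges = set()
    for k, (t1, o1, i1) in enumerate(schedule):
        for t2, o2, i2 in schedule[k + 1:]:
            if t1 != t2 and i1 == i2 and 'w' in (o1, o2):
                edges.add((t1, t2))
    return edges

def pairwise_serializable(schedule):
    """The paper's reduction: under its assumptions on the ordered data
    graph, a conflict cycle can only arise as a 2-cycle, so it suffices
    to check every pair of transactions for mutual edges."""
    edges = conflict_edges(schedule)
    txns = {t for t, _, _ in schedule}
    return all(not ((a, b) in edges and (b, a) in edges)
               for a, b in combinations(txns, 2))

s = [(1, 'r', 'x'), (2, 'w', 'x'), (2, 'r', 'y'), (1, 'w', 'y')]
print(pairwise_serializable(s))  # False: T1 and T2 conflict both ways
```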

Experimental and Theoretical Investigation of Notched Specimens' Life Under Bending Loading

In this work, the bending fatigue life of notched specimens with various notch geometries and dimensions is investigated by experiment and by the Manson-Coffin theoretical method. In this theoretical method, the fatigue life of notched specimens is calculated using the fatigue life obtained from experiments on plain (unnotched) specimens. Three notch geometries, namely U-shape, V-shape, and C-shape notches, are considered in this investigation. The experiments are conducted on a Moore rotating-bending machine. The specimens are made of a low-carbon steel alloy, which has wide application in industry. Stress-life curves are captured for all notched specimens by experiment. The results indicate that the Manson-Coffin analytical method cannot adequately predict the fatigue life of notched specimens; however, it seems that the difference between the experiments and the Manson-Coffin predictions can be compensated by a proportional factor.
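
For reference, the Manson-Coffin strain-life relation and the way a life prediction is extracted from it can be sketched as follows; the material constants are illustrative values for a low-carbon steel, not the paper's data:

```python
def manson_coffin_life(strain_amp, E=200e3, sf=900.0, b=-0.09,
                       ef=0.5, c=-0.56):
    """Solve the Manson-Coffin relation
        ea = (sf/E)*(2N)**b + ef*(2N)**c
    for the life N by bisection on 2N (ea decreases monotonically,
    since b < 0 and c < 0). Units: stresses in MPa."""
    def ea(two_n):
        return (sf / E) * two_n ** b + ef * two_n ** c
    lo, hi = 1.0, 1e9                  # bracket on 2N
    for _ in range(200):
        mid = (lo * hi) ** 0.5         # geometric bisection
        if ea(mid) > strain_amp:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo * hi) ** 0.5      # N = (2N)/2

print(f"N = {manson_coffin_life(0.004):.0f} cycles")
```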

Multi Switched Split Vector Quantization of Narrowband Speech Signals

Vector quantization is a powerful tool for speech coding applications. This paper deals with LPC coding of speech signals using a new technique called Multi Switched Split Vector Quantization (MSSVQ), a hybrid of multistage, switched, and split vector quantization techniques. The spectral distortion performance, computational complexity, and memory requirements of MSSVQ are compared to those of split vector quantization (SVQ), multistage vector quantization (MSVQ), and switched split vector quantization (SSVQ). The results show that MSSVQ has better spectral distortion performance, lower computational complexity, and lower memory requirements than all of the above product code vector quantization techniques. Computational complexity is measured in floating point operations (flops), and memory requirements are measured in floats.
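
The split-VQ kernel that MSSVQ builds on can be sketched as follows (toy codebooks; the switched and multistage layers, which select among codebook sets and quantize the residual of earlier stages, are only described in the comment):

```python
import numpy as np

def split_vq(x, codebooks):
    """Split vector quantization: the LSF/LPC vector is split into
    sub-vectors, each quantized independently by nearest-neighbour
    search in its own codebook. Switched SVQ wraps this kernel with a
    switch that selects among several codebook sets; multistage VQ
    feeds the quantization residual to further stages."""
    out, idx, start = [], [], 0
    for cb in codebooks:                 # cb has shape (size, dim)
        part = x[start:start + cb.shape[1]]
        j = int(np.argmin(((cb - part) ** 2).sum(axis=1)))
        idx.append(j)
        out.append(cb[j])
        start += cb.shape[1]
    return np.concatenate(out), idx

rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(16, 5)), rng.normal(size=(16, 5))]
x = rng.normal(size=10)                  # toy 10-dim LSF vector
xq, idx = split_vq(x, codebooks)
print(idx, float(((x - xq) ** 2).mean()))
```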

Coping with the Rapidity of Information Technology Changes – A Comparison Review on Current Practices

Information technology managers nowadays face tremendous pressure to plan, implement, and adopt new technology solutions due to the rapidity of technology changes. Given the lack of studies on this topic, the aim of this paper is to provide a comparative review of the tools currently used to respond to technological changes. The study is based on an extensive literature review of published works, the majority ranging from 2000 to the first part of 2011. The works were gathered from journals, books, and other information sources available on the Web. Findings show that each tool has a different focus and that none of the tools provides a holistic framework covering the technical, people, process, and business environment aspects. Hence, this result provides useful information about the tools currently available to IT managers for managing changes in technology. Further, the result reveals a research gap: industry lacks such a holistic framework.

Sovereign Credit Risk Measures

This paper focuses on sovereign credit risk, a topical issue related to the current Eurozone crisis. In light of the recent financial crisis, market perception of the creditworthiness of individual sovereigns has changed significantly. Before the outbreak of the financial crisis, market participants did not differentiate between the credit risk borne by individual states, despite different levels of public indebtedness. As the financial crisis proceeded, market participants became aware of the worsening fiscal situation in European countries and started to discriminate among government issuers. Concerns about increasing sovereign risk were reflected in surging sovereign risk premia. The main aim of this paper is to shed light on the characteristics of sovereign risk, with special attention paid to the mutual relation between the credit spread and the CDS premium as the main measures of the sovereign risk premium.
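
The relation between the two measures can be made concrete through the CDS-bond basis, the difference between the CDS premium and the credit spread; the figures below are hypothetical:

```python
def cds_bond_basis(cds_premium_bp, bond_yield_pct, risk_free_pct):
    """Credit spread = sovereign bond yield minus a risk-free
    benchmark; the CDS-bond basis is the CDS premium minus that
    spread (both in basis points). In calm markets the basis is near
    zero; it widened for many sovereigns during the crisis."""
    credit_spread_bp = (bond_yield_pct - risk_free_pct) * 100
    return cds_premium_bp - credit_spread_bp

# hypothetical sovereign: 350 bp CDS, 6.2% yield vs 2.4% benchmark
print(cds_bond_basis(350, 6.2, 2.4))   # -30 bp basis
```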

Fast Depth Estimation with Filters

Fast depth estimation from binocular vision is often desired for autonomous vehicles, but most algorithms cannot easily be put into practice because of their high computational cost. We present an image-processing technique that can quickly estimate a depth image from a pair of binocular vision images. Depth is estimated by finding the lines that represent the best-matched areas in the disparity space image. When detecting these lines, an edge-emphasizing filter is used, and the final depth estimate is produced after a smoothing filter. Our method is a compromise between local methods and global optimization.
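
A toy local matcher conveys the cost-volume (disparity space image) and final-smoothing ideas; the paper's line detection and edge-emphasizing filter are not reproduced here:

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def disparity_map(left, right, max_d=16, win=7):
    """Windowed SAD cost for each candidate disparity (one slice of
    the disparity space image per d), winner-takes-all selection,
    then a median filter as the final smoothing step."""
    h, w = left.shape
    costs = np.full((max_d, h, w), np.inf)
    for d in range(max_d):
        diff = np.abs(left[:, d:] - right[:, :w - d])
        costs[d, :, d:] = uniform_filter(diff, size=win)
    disp = np.argmin(costs, axis=0).astype(float)
    return median_filter(disp, size=5)

# synthetic pair: a bright square shifted right by 4 pixels
L = np.zeros((40, 60)); L[15:25, 30:40] = 1.0
R = np.zeros((40, 60)); R[15:25, 26:36] = 1.0
print(disparity_map(L, R)[20, 34])   # ~4 inside the square
```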

Application of Novel Conserving Immersed Boundary Method to Moving Boundary Problem

A new conserving approach in the context of the Immersed Boundary Method (IBM) is presented to simulate one-dimensional, incompressible flow in a moving boundary problem. The method employs a control volume scheme to simulate the flow field. The concept of a ghost node is used at the boundaries to conserve the mass and momentum equations. The present method implements the conservation laws in all cells, including the boundary control volumes. Application of the method is studied in a test case with a moving boundary. Comparison between the results of this new method and a sharp-interface IBM algorithm (the Image Point Method) shows a marked improvement in both the pressure and velocity fields of the present method; fluctuations in the pressure field are fully resolved. This approach expands the capability of the IBM to simulate flow fields for a variety of problems by implementing the conservation laws on a fully Cartesian grid, in contrast to other conserving methods.
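
A minimal one-dimensional sketch of the conservative ghost-node idea (here for a diffusion equation with a time-varying wall value standing in for the moving boundary; an illustration of the concept, not the paper's scheme):

```python
import numpy as np

def step_fv(u, dx, dt, nu, u_wall):
    """One conservative control-volume update of u_t = nu*u_xx. The
    ghost value is chosen so the left face takes the wall value, and
    the same flux formula is then used in every cell, including the
    boundary control volume, so mass is conserved discretely."""
    ghost = 2.0 * u_wall - u[0]                  # enforce face value
    ue = np.concatenate(([ghost], u, [u[-1]]))   # zero-gradient right end
    flux = nu * (ue[1:] - ue[:-1]) / dx          # diffusive face fluxes
    return u + dt / dx * (flux[1:] - flux[:-1])

nx = 50
dx, nu = 1.0 / nx, 0.01
dt = 0.4 * dx * dx / nu                          # stable explicit step
u = np.zeros(nx)
for n in range(500):                             # wall value varies in time
    u = step_fv(u, dx, dt, nu, u_wall=np.sin(0.01 * n))
print(u[:5])
```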

The Effect of Electrical Stimulation Intensity on VEGF Expression and Biomechanical Properties during Wound Healing

We evaluated the effect of sensory (direct current (DC), 600 μA) and motor (monophasic current, pulse duration 300 μs, 100 Hz, 2.5-3 mA) intensities of cathodal electrical stimulation (ES) on VEGF release and the biomechanical properties of the wound. 54 male Sprague-Dawley rats were randomly assigned to one control and two experimental groups. A full-thickness skin incision was made on the animals' dorsal region. The experimental groups received ES for 1 h/day, every other day. VEGF expression in the skin was measured on the 7th day after the surgical incision, and tensile strength was measured on the 21st day. On the 7th day, skin VEGF values in the sensory group were significantly greater than those of the other groups (p < 0.05). Neither sensory- nor motor-intensity stimulation improved the biomechanical properties of the repaired wounds. It seems the mechanical environment induced by sensory and motor intensities of electrical stimulation cannot reproduce the role of normal daily stress and strain in the maturation of collagen fibers and their cross-links. Further work is needed to determine the relationship between VEGF expression after ES and its effect on the tensile strength of the healed wound.

Peakwise Smoothing of Data Models using Wavelets

Smoothing or filtering of data is the first preprocessing step for noise suppression in many applications involving data analysis. The moving average is the most popular method of smoothing data; its generalization led to the development of the Savitzky-Golay filter. Many window smoothing methods have been developed by convolving the data with different window functions for different applications; the most widely used window functions are the Gaussian and the Kaiser windows. Function approximation of the data by polynomial regression, Fourier expansion, or wavelet expansion also yields smoothed data, and wavelets can smooth the data to a great extent by thresholding the wavelet coefficients. However, almost all smoothing methods destroy peaks and flatten them as the support of the window is increased. In certain applications it is desirable to retain peaks while smoothing the data as much as possible. In this paper we present a methodology, called peakwise smoothing, that smooths the data to any desired level without losing the major peak features.
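
A simple peak-preserving smoother conveys the idea, using Savitzky-Golay smoothing and peak detection rather than the paper's wavelet construction:

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

def peakwise_smooth(y, window=31, poly=2, halfwidth=8):
    """Smooth heavily with a wide Savitzky-Golay window, detect the
    major peaks on the raw signal, and keep the raw samples near each
    peak via a cosine blend, so wide-window smoothing does not
    flatten the peaks."""
    smooth = savgol_filter(y, window, poly)
    peaks, _ = find_peaks(y, prominence=0.5 * (y.max() - y.min()))
    w = np.zeros(len(y))                       # blending weights in [0, 1]
    for p in peaks:
        lo, hi = max(0, p - halfwidth), min(len(y), p + halfwidth + 1)
        t = np.linspace(-np.pi, np.pi, hi - lo)
        w[lo:hi] = np.maximum(w[lo:hi], 0.5 * (1 + np.cos(t)))
    return w * y + (1 - w) * smooth

x = np.linspace(0, 10, 500)
y = np.exp(-40 * (x - 5) ** 2) + 0.1 * np.sin(40 * x)  # one peak + noise
print(y.max(), savgol_filter(y, 31, 2).max(), peakwise_smooth(y).max())
```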

Green Building and Energy Saving

In a world of climate change and limited fossil fuel resources, renewable energy sources are playing an increasingly important role. Due to industrialization and population growth, our economy and technologies today depend largely upon natural resources, which are not replaceable. Approximately 90% of our energy consumption comes from fossil fuels (viz. coal, oil, and natural gas), and the irony is that these resources are being depleted. Moreover, the huge consumption of fossil fuels has caused visible damage to the environment in various forms, viz. global warming, acid rain, etc.

Surface Flattening based on Linear-Elastic Finite Element Method

This paper presents a flattening algorithm for three-dimensional triangular surfaces based on the linear-elastic finite element method. First, an intrinsic characteristic-preserving method is used to obtain the initial developing graph, which preserves the angles and length ratios between adjacent edges. Then, an iterative equation is established based on the linear-elastic finite element method, and the flattening result, with an equilibrium state of internal forces, is obtained by solving this iterative equation. The results show that complex surfaces can be handled by the proposed method, making it an efficient tool for computer-aided design applications such as mould design.
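
A mass-spring relaxation, a simpler cousin of the paper's linear-elastic FEM iteration, illustrates the idea of iterating 2-D vertex positions toward an equilibrium of internal forces that restores the 3-D edge lengths:

```python
import numpy as np

def flatten(verts3d, edges, iters=2000, lr=0.1):
    """Start from a naive planar projection and iteratively move 2-D
    vertices so each edge recovers its 3-D rest length (linear
    springs); the fixed point is an equilibrium of internal forces."""
    rest = {e: np.linalg.norm(verts3d[e[0]] - verts3d[e[1]])
            for e in edges}
    uv = verts3d[:, :2].copy()         # initial developing graph (naive)
    for _ in range(iters):
        force = np.zeros_like(uv)
        for (i, j), L0 in rest.items():
            d = uv[j] - uv[i]
            L = np.linalg.norm(d) + 1e-12
            f = (L - L0) * d / L       # linear spring force along edge
            force[i] += f
            force[j] -= f
        uv += lr * force
    return uv

# toy "tent": a unit square with one corner lifted, triangulated
v = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0.4], [0, 1, 0]], float)
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
uv = flatten(v, E)
for i, j in E:                         # 2-D lengths approach 3-D ones
    print(round(np.linalg.norm(uv[i] - uv[j]), 3),
          round(np.linalg.norm(v[i] - v[j]), 3))
```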

A New Model of English-Vietnamese Bilingual Information Retrieval System

In this paper, we propose a new model of an English-Vietnamese bilingual Information Retrieval system. Although many CLIR systems have been researched and built, the accuracy of search results across the languages a CLIR system supports still needs to improve, especially in finding bilingual documents. The problems identified in this paper are the limitations of machine translation's results and the extremely large collections of documents to be searched. We therefore propose a different model to overcome these problems.

Computer Aided X-Ray Diffraction Intensity Analysis for Spinels: Hands-On Computing Experience

The mineral with the chemical composition MgAl2O4 is called "spinel". Ferrites that crystallize in the spinel structure are known as spinel ferrites or ferrospinels. The spinel structure has an fcc cage of oxygen ions, and the metallic cations are distributed among tetrahedral (A) and octahedral (B) interstitial voids (sites). The X-ray diffraction (XRD) intensity of each Bragg plane is sensitive to the distribution of cations in the interstitial voids of the spinel lattice. This leads to a method for determining the distribution of cations in spinel oxides through XRD intensity analysis. A computer program for XRD intensity analysis has been developed in the C language and tested in a real experimental situation by synthesizing the spinel ferrite materials Mg0.6Zn0.4AlxFe2-xO4 and characterizing them by X-ray diffractometry. The compositions Mg0.6Zn0.4AlxFe2-xO4 (x = 0.0 to 0.6) were prepared by the ceramic method, and powder X-ray diffraction patterns were recorded. The authenticity of the program is thus checked by comparing the theoretically calculated data from the computer simulation with the experimental ones. Further, the deduced cation distributions were used to fit the magnetization data using the localized canting of spins approach, explaining the "recovery" of the collinear spin structure due to Al3+ substitution in Mg-Zn ferrites, which would otherwise show A-site magnetic dilution and a non-collinear spin structure. Since the distribution of cations in spinel ferrites plays a very important role in their electrical and magnetic properties, it is essential to determine the cation distribution in the spinel lattice.
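
The authors' program is written in C; the kernel of any such calculation, the kinematic structure factor, can be sketched in a few lines of Python (toy scattering factors and a deliberately abbreviated site list; a real run would enumerate all 8 A, 16 B, and 32 O sites of the spinel cell and include multiplicity and Lorentz-polarization factors):

```python
import numpy as np

def F_hkl(h, k, l, atoms):
    """Kinematic structure factor
    F(hkl) = sum_i occ_i * f_i * exp(2*pi*i*(h*x + k*y + l*z)),
    with intensity proportional to |F|^2. 'atoms' is a list of
    (f, occupancy, (x, y, z)) with fractional coordinates."""
    return sum(f * occ * np.exp(2j * np.pi * (h * x + k * y + l * z))
               for f, occ, (x, y, z) in atoms)

fcc = [(0, 0, 0), (0, .5, .5), (.5, 0, .5), (.5, .5, 0)]
def sites(base, f, occ):               # replicate by fcc translations
    return [(f, occ, tuple((b + t) % 1 for b, t in zip(base, v)))
            for v in fcc]

x = 0.3                                # toy fraction of Fe on A sites
atoms = (sites((1/8, 1/8, 1/8), 24.0, x)        # Fe-like on A sites
       + sites((1/8, 1/8, 1/8), 10.0, 1 - x)    # Mg-like on A sites
       + sites((1/2, 1/2, 1/2), 24.0, 1 - x))   # Fe-like on B sites
for hkl in [(2, 2, 0), (3, 1, 1), (4, 0, 0)]:   # intensity vs. occupancy
    print(hkl, round(abs(F_hkl(*hkl, atoms)) ** 2, 2))
```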

Modified Fuzzy ARTMAP and Supervised Fuzzy ART: Comparative Study with Multispectral Classification

In this article, a modification of the fuzzy ART network algorithm, aiming to make it supervised, is carried out. It consists of searching for the comparison, training, and vigilance parameters that give the minimum quadratic distance between the output of the training base and that obtained by the network. The same process is applied to determine the parameters of the fuzzy ARTMAP that give the most powerful network. The modification consists in having the fuzzy ARTMAP learn a base of examples not just once, as is usual, but as many times as its architecture keeps evolving or the objective error has not been reached. In this way, we need not worry about the values to impose on the eight parameters of the network. To evaluate each of these three modified networks, a comparison of their performances is carried out. As an application, we carried out a classification of an image of the Bay of Algiers taken by SPOT XS. As evaluation criteria, we use the training duration, the mean square error (MSE) at the control step, and the rate of correct classification per class. The results of this study, presented as curves, tables, and images, show that the modified fuzzy ARTMAP presents the best quality/computing-time compromise.
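
The building block being modified, a single fuzzy ART presentation with choice function, vigilance test, and fast learning, can be sketched as follows (the paper's modification wraps such steps in an outer loop that re-presents the training base until the architecture stops evolving or the objective error is reached):

```python
import numpy as np

def fuzzy_art_step(I, W, rho=0.75, alpha=0.001, beta=1.0):
    """One fuzzy ART presentation. I is a complement-coded input;
    categories compete via Tj = |I ^ wj| / (alpha + |wj|) (^ = fuzzy
    AND, i.e. min); the winner must pass the vigilance test
    |I ^ wj| / |I| >= rho or it is reset; the accepted winner learns
    wj <- beta*(I ^ wj) + (1 - beta)*wj."""
    T = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in W]
    for j in np.argsort(T)[::-1]:                 # best category first
        if np.minimum(I, W[j]).sum() / I.sum() >= rho:
            W[j] = beta * np.minimum(I, W[j]) + (1 - beta) * W[j]
            return W, int(j)
    W.append(I.copy())                            # no match: new category
    return W, len(W) - 1

a = np.array([0.2, 0.7]); W, j1 = fuzzy_art_step(np.r_[a, 1 - a], [])
b = np.array([0.25, 0.65]); W, j2 = fuzzy_art_step(np.r_[b, 1 - b], W)
print(j1, j2)   # 0 0: similar inputs fall into the same category
```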

The Effects of Various Boundary Conditions on the Thermal Buckling of a Functionally Graded Beam with Piezoelectric Layers Based on Third-Order Shear Deformation Theory

This article analyzes the thermal buckling of a functionally graded beam with piezoelectric layers, based on third-order shear deformation theory and considering various boundary conditions. The beam properties are assumed to vary continuously from the lower surface to the upper surface of the beam. The equilibrium equations are derived using the total potential energy equations, the Euler equations, the piezoelectric material constitutive equations, and the assumptions of third-order shear deformation theory. To this end, the functionally graded beam with piezoelectric layers is first analyzed under clamped-clamped boundary conditions; then, after verifying the correctness of all the equations, the same beam is analyzed under simply supported-simply supported and simply supported-clamped boundary conditions. The critical buckling temperature of the functionally graded beam is derived in two different ways, without and with the piezoelectric layers, and the results are compared. Finally, all the conclusions obtained are compared and contrasted for the same samples under the same and different conditions through tables and charts. The software MAPLE was used for the numerical calculations.
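
The continuous through-thickness variation is commonly modelled with a power law; since the abstract does not state the grading rule, the exponent k below is an assumption for illustration:

```python
def fgm_property(z, h, P_lower, P_upper, k=2.0):
    """Power-law grading for a functionally graded beam of thickness
    h: P(z) = (Pu - Pl) * (z/h + 1/2)**k + Pl for -h/2 <= z <= h/2,
    giving P_lower at the bottom surface and P_upper at the top."""
    return (P_upper - P_lower) * (z / h + 0.5) ** k + P_lower

# Young's modulus of a metal/ceramic beam (Al bottom, alumina top)
for z in (-0.5, 0.0, 0.5):             # z in units of h
    print(z, fgm_property(z, 1.0, P_lower=70e9, P_upper=380e9))
```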

Learning to Order Terms: Supervised Interestingness Measures in Terminology Extraction

Term extraction, a key data preparation step in text mining, extracts terms, i.e. relevant collocations of words, attached to specific concepts (e.g. genetic-algorithms and decision-trees are terms associated with the concept "Machine Learning"). In this paper, the task of extracting interesting collocations is achieved through a supervised learning algorithm, exploiting a few collocations manually labelled as interesting/not interesting. From these examples, the ROGER algorithm learns a numerical function inducing a ranking on the collocations. This ranking is optimized using genetic algorithms, maximizing the trade-off between the false positive and true positive rates (the area under the ROC curve). This approach uses a particular representation of the word collocations, namely the vector of values of the standard statistical interestingness measures for each collocation. As this representation is general (across corpora and natural languages), generality tests were performed by applying the ranking function learned from an English corpus in biology to a French corpus of curricula vitae, and vice versa, showing the good robustness of the approach compared to the state-of-the-art Support Vector Machine (SVM).
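
The fitness that ROGER's genetic search maximizes, the area under the ROC curve, reduces to the Wilcoxon-Mann-Whitney statistic over the labelled examples:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a randomly drawn interesting
    collocation is ranked above a non-interesting one by the learned
    scoring function (ties count one half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# scores of 3 interesting and 3 non-interesting collocations
print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))  # 8/9 ~ 0.89
```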

Prediction of Natural Gas Viscosity using Artificial Neural Network Approach

The viscosity of natural gas is an important parameter in energy industries such as natural gas storage and transportation. In this study, the viscosity of natural gases of different compositions is modeled using an artificial neural network (ANN) based on the back-propagation method. A reliable database including more than 3841 experimental viscosity data points is used for training and testing the ANN. The designed neural network can predict natural gas viscosity from the pseudo-reduced pressure and pseudo-reduced temperature with an AARD of 0.221%. The accuracy of the designed ANN has been compared to other published empirical models; the comparison indicates that the proposed method provides accurate results.
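
A schematic of such a setup (synthetic stand-in data and an assumed layer configuration; the paper's experimental database and exact architecture are not given in this abstract):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Back-propagation ANN mapping (pseudo-reduced pressure Ppr,
# pseudo-reduced temperature Tpr) to viscosity. The target function
# below is a fake placeholder, used only to make the sketch runnable.
rng = np.random.default_rng(1)
X = rng.uniform([0.2, 1.0], [15.0, 3.0], size=(3841, 2))   # (Ppr, Tpr)
y = 0.01 * (1 + 0.05 * X[:, 0] ** 1.3 / X[:, 1])           # fake mu, cP
net = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=2000,
                   random_state=0).fit(X[:3000], y[:3000])
pred = net.predict(X[3000:])
aard = 100 * np.mean(np.abs(pred - y[3000:]) / y[3000:])
print(f"AARD = {aard:.3f}% on the held-out points")
```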

Fire Spread Simulation Tool for Cruise Vessels

In 2002, an amendment to SOLAS opened the way for lightweight material constructions in vessels, provided the same fire safety as in steel constructions can be demonstrated. FISPAT (FIreSPread Analysis Tool) is a computer application that simulates fire spread and fault injection in cruise vessels and identifies fire-sensitive areas. It was developed to analyze cruise vessel designs and provides a method for evaluating the network layout and safety of cruise vessels. It allows fast, reliable, and deterministic exhaustive simulations and presents the results in a graphical vessel model. By performing the analysis iteratively while altering the cruise vessel design, it can be used along with fire chamber experiments to show that a lightweight design can be as safe as a steel construction and that the SOLAS regulations are fulfilled.
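
The kind of exhaustive, deterministic reachability analysis described can be sketched over a compartment graph, with fault injection modelled as failed barriers (a toy illustration, not FISPAT's actual model):

```python
from collections import deque

def fire_reach(adjacency, start, failed=frozenset()):
    """Compartments are nodes, fire-spread paths are directed edges;
    injecting a fault removes a barrier edge. A breadth-first search
    reports every area a fire starting in 'start' can reach, so
    iterating over start nodes and fault sets gives an exhaustive,
    deterministic analysis of fire-sensitive areas."""
    reached, queue = {start}, deque([start])
    while queue:
        room = queue.popleft()
        for nxt in adjacency.get(room, ()):
            if (room, nxt) not in failed and nxt not in reached:
                reached.add(nxt)
                queue.append(nxt)
    return reached

ship = {'cabin1': ['corridor'], 'cabin2': ['corridor'],
        'corridor': ['cabin1', 'cabin2', 'stair'],
        'stair': ['deck2'], 'deck2': []}
print(fire_reach(ship, 'cabin1'))                           # everything
print(fire_reach(ship, 'cabin1', {('corridor', 'stair')}))  # door holds
```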