Application of Acidithiobacillus ferrooxidans in Desulfurization of US Coal: 10 L Batch Stirred Reactor Study

The desulfurization of coal using biological methods is an emerging technology. The biodesulfurization process uses the catalytic activity of chemolithotrophic acidophiles to remove sulfur and pyrite from coal. The present study was undertaken to examine the potential of Acidithiobacillus ferrooxidans in removing pyritic sulfur and iron from US coal with high iron and sulfur content. The experiment was carried out in a 10 L batch stirred tank reactor at 10% coal pulp density. The reactor was operated under mesophilic conditions, and aerobic conditions were maintained by sparging air into the reactor. After 35 days of operation, about 64% of the pyrite and 21% of the pyritic sulfur had been removed from the coal. The findings of the present study indicate that the biodesulfurization process does have potential for treating coal with high pyrite and sulfur content. A good mass balance was also obtained, with a net loss of about 5%, indicating the feasibility of the process for large-scale application.

Understanding and Designing Situation-Aware Mobile and Ubiquitous Computing Systems

Using spatial models as a shared common basis of information about the environment for different kinds of context-aware systems has been a heavily researched topic in recent years. This research has focused on how to create, update, and merge spatial models so as to enable highly dynamic, consistent, and coherent spatial models at large scale. In this paper, however, we concentrate on how context-aware applications could use this information to adapt their behavior according to the situation they are in. The main idea is to provide the spatial model infrastructure with a situation recognition component based on generic situation templates. A situation template is – as part of a much larger situation template library – an abstract, machine-readable description of a certain basic situation type, which can be used by different applications to evaluate their situation. In this paper, different theoretical and practical issues – technical, ethical, and philosophical – that are important for understanding and developing situation-dependent systems based on situation templates are discussed. A basic system design is presented which allows for reasoning with uncertain data using an improved version of a learning algorithm for the automatic adaptation of situation templates. Finally, to support the development of adaptive applications, we present a new situation-aware adaptation concept based on workflows.
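
To make the notion of a situation template concrete, the following minimal sketch encodes one as a small data structure with a rule-based matching function. The field names, rule format, and the example "meeting" situation are illustrative assumptions, not the schema or template library proposed in the paper:

```python
from dataclasses import dataclass

@dataclass
class SituationTemplate:
    """Hypothetical machine-readable description of a basic situation type."""
    name: str
    context_features: list[str]      # context/sensor inputs the template needs
    conditions: dict[str, tuple]     # feature -> (operator, threshold); assumed rule format

    def matches(self, context: dict[str, float]) -> bool:
        """Evaluate the template against the current context values."""
        ops = {"<": lambda a, b: a < b, ">": lambda a, b: a > b, "==": lambda a, b: a == b}
        return all(ops[op](context.get(feature, float("nan")), threshold)
                   for feature, (op, threshold) in self.conditions.items())

# Illustrative "meeting" situation built from two context features.
meeting = SituationTemplate(
    name="meeting",
    context_features=["persons_in_room", "noise_level_db"],
    conditions={"persons_in_room": (">", 2), "noise_level_db": ("<", 60)},
)
print(meeting.matches({"persons_in_room": 4, "noise_level_db": 45}))   # True
```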

A Novel VLSI Architecture for Image Compression Model Using Low-Power Discrete Cosine Transform

In image processing, image compression can improve the performance of digital systems by reducing the cost and time of image storage and transmission without significant loss of image quality. This paper describes a low-complexity hardware architecture for the Discrete Cosine Transform (DCT) for image compression [6]. In this DCT architecture, common computations are identified and shared to remove redundant computations in the DCT matrix operation. Vector processing is used to implement the DCT. This reduction in the computational complexity of the 2D DCT reduces power consumption. The 2D DCT is performed on an 8x8 matrix using two one-dimensional DCT blocks and a transposition memory [7]. The inverse discrete cosine transform (IDCT) is performed to recover the image matrix and reconstruct the original image. The proposed image compression algorithm is implemented in MATLAB. The VLSI design of the architecture is implemented using Verilog HDL. The proposed hardware architecture for image compression employing the DCT was synthesized using RTL Compiler and mapped to 180 nm standard cells. Simulation is done using ModelSim. The simulation results from MATLAB and Verilog HDL are compared. Detailed analysis of power and area was done using RTL Compiler from Cadence. The power consumption of the DCT core is reduced to 1.027 mW with minimal area [1].
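
The row-column decomposition described above, an 8x8 2D DCT computed as two 1D DCT passes separated by a transposition, can be sketched as a behavioural reference in a few lines. This is only a software model for checking results (similar in spirit to the MATLAB reference mentioned above), not the proposed hardware architecture:

```python
import numpy as np
from scipy.fft import dct, idct

def dct2_8x8(block: np.ndarray) -> np.ndarray:
    """2D DCT of an 8x8 block via two 1D DCT passes (row-column decomposition),
    mirroring the two 1D DCT blocks plus transposition memory of a hardware core."""
    rows = dct(block, type=2, norm="ortho", axis=1)   # 1D DCT along rows
    return dct(rows, type=2, norm="ortho", axis=0)    # 1D DCT along columns

def idct2_8x8(coeffs: np.ndarray) -> np.ndarray:
    """Inverse 2D DCT used to reconstruct the image block."""
    cols = idct(coeffs, type=2, norm="ortho", axis=0)
    return idct(cols, type=2, norm="ortho", axis=1)

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
assert np.allclose(idct2_8x8(dct2_8x8(block)), block)   # forward + inverse round-trip
```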

Development of Coronal Field and Solar Wind Components for MHD Interplanetary Simulations

The connection between solar activity and adverse phenomena in the Earth’s environment that can affect space- and ground-based technologies has spurred interest in Space Weather (SW) research. Great effort has been put into the development of suitable models that can provide advance forecasts of SW events. With progress in computational technology, it is becoming possible to develop operational, large-scale, physics-based models that incorporate the most important physical processes and domains of the Sun-Earth system. In order to enhance our SW prediction capabilities, we are developing advanced numerical tools. With operational requirements in mind, our goal is to develop a modular framework for simulating the propagation of disturbances from the Sun through interplanetary space to the Earth. Here, we report and discuss the development of coronal field and solar wind components for a large-scale MHD code. The model for these components is based on a potential field source surface model and an empirical Wang-Sheeley-Arge solar wind relation.
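
The empirical Wang-Sheeley-Arge step maps the flux-tube expansion factor obtained from the potential field source surface (PFSS) solution to a solar wind speed used at the model's inner boundary. A minimal sketch of that mapping follows; the functional form and coefficients are one commonly quoted variant and stand in for the calibration actually used in this work:

```python
import numpy as np

def wsa_wind_speed(expansion_factor, v_slow=267.5, v_boost=410.0, alpha=0.4):
    """Wang-Sheeley-Arge-type empirical mapping from the PFSS flux-tube
    expansion factor f_s to solar wind speed [km/s].
    Coefficients are illustrative, not the values tuned for this simulation."""
    fs = np.asarray(expansion_factor, dtype=float)
    return v_slow + v_boost / fs**alpha

# Small expansion factors (open coronal-hole field lines) give fast wind;
# large expansion factors (near the streamer belt) give slow wind.
print(wsa_wind_speed([1.0, 5.0, 50.0]))
```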

The Projection Methods for Computing the Pseudospectra of Large Scale Matrices

Projection methods, usually viewed as methods for computing eigenvalues, can also be used to estimate pseudospectra. This paper proposes projection methods for computing the pseudospectra of large-scale matrices, including an orthogonal projection method and an oblique projection method. This possibility may be of practical importance in applications involving large-scale, highly non-normal matrices. Numerical algorithms are given, and numerical experiments illustrate the efficiency of the new algorithms.
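
As a rough illustration of the orthogonal-projection idea, the sketch below runs an Arnoldi process on a large matrix and evaluates the smallest singular value of the projected shifted operator on a set of grid points, the quantity whose epsilon-level sets define the pseudospectra (sigma_eps(A) = {z : sigma_min(zI - A) <= eps}). It is a generic Arnoldi-based sketch, not the specific algorithms proposed in the paper:

```python
import numpy as np

def arnoldi(A, v0, m):
    """Arnoldi process returning the orthonormal basis V and the rectangular
    (m+1) x m Hessenberg matrix H of the projected operator."""
    n = A.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                       # breakdown: invariant subspace found
            return V[:, :j + 1], H[:j + 2, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def projected_pseudospectrum(A, grid, m=30, seed=0):
    """Approximate sigma_min(zI - A) at each grid point z from the
    rectangular Hessenberg matrix of an orthogonal (Arnoldi) projection."""
    rng = np.random.default_rng(seed)
    _, H = arnoldi(A, rng.standard_normal(A.shape[0]), m)
    k = H.shape[1]
    E = np.eye(k + 1, k)                              # rectangular "identity"
    return np.array([np.linalg.svd(z * E - H, compute_uv=False)[-1] for z in grid])

# Highly non-normal test matrix: random upper-triangular, 200 x 200.
A = np.triu(np.random.default_rng(1).standard_normal((200, 200)))
print(projected_pseudospectrum(A, [0.5 + 0.5j, 1.0, 2.0 + 1.0j], m=40))
```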

Evolutionary Techniques for Model Order Reduction of Large Scale Linear Systems

Recently, the genetic algorithm (GA) and the particle swarm optimization (PSO) technique have attracted considerable attention among modern heuristic optimization techniques. The GA has been popular in academia and industry mainly because of its intuitiveness, ease of implementation, and ability to effectively solve highly non-linear, mixed-integer optimization problems that are typical of complex engineering systems. The PSO technique is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behavior of biological populations. In this paper, both PSO and GA optimization are employed for finding stable reduced-order models of single-input single-output large-scale linear systems. Both techniques guarantee stability of the reduced-order model if the original high-order model is stable. The PSO method is based on minimizing the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model to a unit step input. Both methods are illustrated through a numerical example from the literature, and the results are compared with a recently published conventional model reduction technique.
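
The ISE objective minimized here can be sketched directly from its definition: simulate the unit-step responses of the full and candidate reduced models and integrate the squared difference. The second-order reduced-model parameterization and the benchmark coefficients below are illustrative, and a GA or PSO would be wrapped around this objective in place of the authors' implementations:

```python
import numpy as np
from scipy import signal

def ise_objective(reduced_params, full_num, full_den, t_end=10.0, n_points=1000):
    """Integral Squared Error between unit-step responses of the original
    high-order model and a candidate reduced model
    G_r(s) = (b1*s + b0) / (s^2 + a1*s + a0)."""
    b1, b0, a1, a0 = reduced_params
    t = np.linspace(0.0, t_end, n_points)
    _, y_full = signal.step(signal.TransferFunction(full_num, full_den), T=t)
    _, y_red = signal.step(signal.TransferFunction([b1, b0], [1.0, a1, a0]), T=t)
    return float(np.sum((y_full - y_red) ** 2) * (t[1] - t[0]))

# Illustrative 4th-order test transfer function; a PSO/GA would search
# reduced_params (keeping a1, a0 > 0 so the reduced model stays stable).
full_num = [1.0, 7.0, 24.0, 24.0]
full_den = [1.0, 10.0, 35.0, 50.0, 24.0]
print(ise_objective([1.0, 0.8, 2.0, 0.8], full_num, full_den))
```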

An Economic Analysis of Phu Kradueng National Park

The purposes of this study were to evaluate the economic value of Phu Kradueng National Park using the travel cost method (TCM) and the contingent valuation method (CVM), and to estimate the demand for traveling and the willingness to pay. The data for this study were collected through two large-scale surveys of users and non-users; a total of 1,016 users and 1,034 non-users were interviewed. The data were analyzed using multiple linear regression analysis and a logistic regression model, and the consumer surplus (CS) was computed as the integral of the demand function for trips. The findings were as follows: 1) Using the travel cost method, which provides an estimate of direct benefits to park users, visitors' total willingness to pay per visit was 2,284.57 baht, of which 958.29 baht was travel cost, 1,129.82 baht was expenditure on accommodation, food, and services, and 166.66 baht was consumer surplus, i.e., the visitors' net gain or satisfaction from the visit (the integral of the demand function for trips). 2) Thai visitors to Phu Kradueng National Park were further willing to pay an average of 646.84 baht per head per year to ensure the continued existence of the park and to preserve their option to use it in the future. 3) Thai non-visitors, on the other hand, were willing to pay an average of 212.61 baht per head per year for the option and existence value provided by the park. 4) The total economic value of Phu Kradueng National Park to Thai visitors and non-visitors taken together stands today at 9,249.55 million baht per year. 5) The users' average willingness to pay for access to Phu Kradueng National Park rises from 40 baht to 84.66 baht per head per trip for improved services such as road improvement, increased cleanliness, and upgraded information. Further investigation is needed of the potential market demand for bioprospecting in Phu Kradueng National Park and of how a larger share of the economic benefits of tourism could be distributed to local residents.
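
The consumer-surplus figure in the travel cost method comes from integrating the estimated trip-demand function over travel cost, from the observed cost up to the choke price at which demand vanishes. The sketch below illustrates that calculation with a hypothetical linear demand function; the coefficients are invented for illustration and do not reproduce the study's estimates:

```python
from scipy.integrate import quad

# Hypothetical linear trip-demand function estimated by regression:
# trips(cost) = a - b * cost   (trips per visitor as a function of travel cost in baht)
a, b = 3.0, 0.001             # illustrative coefficients, not the study's estimates
current_cost = 958.29         # observed average travel cost per visit (baht), from the abstract
choke_price = a / b           # travel cost at which demand falls to zero

def demand(cost):
    return max(a - b * cost, 0.0)

consumer_surplus, _ = quad(demand, current_cost, choke_price)
print(f"Illustrative consumer surplus per visitor: {consumer_surplus:.2f} baht")
```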

An Artificial Neural Network Based Model for Predicting H2 Production Rates in a Sucrose-Based Bioreactor System

The performance of sucrose-based H2 production in a completely stirred tank reactor (CSTR) was modeled using a back-propagation (BP) neural network algorithm. The H2 production was monitored over a period of 450 days at 35±1 °C. The proposed model predicts H2 production rates based on hydraulic retention time (HRT), recycle ratio, sucrose concentration and degradation, biomass concentration, pH, alkalinity, oxidation-reduction potential (ORP), and the concentrations of acids and alcohols. Artificial neural networks (ANNs) can capture non-linear information very efficiently. In this study, a predictive controller is proposed for the management and operation of large-scale H2-fermenting systems, and the relevant control strategies can be activated by this method. The BP-based ANN modeling results were very successful, and an excellent match was obtained between the measured and predicted rates. Efficient H2 production and system control can be provided by the predictive control method combined with the robust BP-based ANN modeling tool.
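
A back-propagation network of the kind described, operating variables in and H2 production rate out, can be sketched with a standard multilayer-perceptron regressor. scikit-learn's MLPRegressor stands in for the authors' BP implementation, and the synthetic data, feature ordering, and network size are assumptions for illustration only:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Stand-in operating data: columns assumed to be HRT, recycle ratio, sucrose
# concentration, sucrose degradation, biomass, pH, alkalinity/ORP, acids+alcohols.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 8))                                   # synthetic CSTR measurements
y = 2.0 * X[:, 0] + X[:, 4] + 0.1 * rng.standard_normal(200)     # synthetic H2 production rate

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0),
)
model.fit(X, y)
print("R^2 on training data:", model.score(X, y))
```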

A General Framework for Modeling Replicated Real-Time Databases

There are many issues that affect the modeling and design of real-time databases. One of these issues is maintaining consistency between the actual state of a real-time object in the external environment and its images as reflected by all of its replicas distributed over multiple nodes. The need to improve scalability is another important issue. In this paper, we present a general framework for designing a replicated real-time database for small- to medium-scale systems while maintaining all timing constraints. To extend the idea to modeling a large-scale database, we present a general outline that improves scalability by applying an existing static segmentation algorithm to the whole database with the intent of lowering the degree of replication. Allowing segments to have individual degrees of replication avoids excessive resource usage, and together these measures contribute to solving the scalability problem for distributed real-time database systems (DRTDBS).

Students' Perceptions of the Value of the Elements of an Online Learning Environment: An Investigation of Discipline Differences

This paper presents a large scale, quantitative investigation of the impact of discipline differences on the student experience of using an online learning environment (OLE). Based on a representative sample of 2526 respondents, a number of significant differences in the mean rating by broad discipline area of the importance of, and satisfaction with, a range of elements of an OLE were found. Broadly speaking, the Arts and Science and Technology discipline areas reported the lowest importance and satisfaction ratings for the OLE, while the Health and Behavioural Sciences area was the most satisfied with the OLE. A number of specific, systematic discipline differences are reported and discussed. Compared to the observed significant differences in mean importance ratings, there were fewer significant differences in mean satisfaction ratings, and those that were observed were less systematic than for importance ratings.

Stability of Interconnected Systems under Structural Perturbation: Decomposition-Aggregation Approach

In this paper, the decomposition-aggregation method is used to derive connective stability criteria for a general linear composite system via aggregation. The large-scale system is decomposed into a number of subsystems. By associating directed graphs with dynamic systems in an essential way, we define the relation between system structure and stability in the sense of Lyapunov. The stability criteria are then expressed in terms of the stability properties and system matrices of the subsystems, as well as the interconnection terms among them, using the concepts of vector differential inequalities and vector Lyapunov functions. We then show that stability of each subsystem and stability of the aggregate model imply connective stability of the overall system. An example is reported, showing the efficiency of the proposed technique.
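
The aggregation step can be illustrated with a small numerical sketch: form an aggregate (comparison) matrix whose diagonal entries carry the stability margins of the isolated subsystems and whose off-diagonal entries bound the interconnection strengths, then test that this matrix is Hurwitz. The construction and the numbers below are a generic textbook-style illustration of the decomposition-aggregation idea, not the paper's example:

```python
import numpy as np

def aggregate_matrix(subsys_margins, interconnection_bounds):
    """Aggregate (comparison) matrix W of the decomposition-aggregation method:
    diagonal entries are the negative stability margins of the isolated
    subsystems, off-diagonal entries bound the interconnection strengths
    (worst case over the structural perturbation, all links present)."""
    sigma = np.asarray(subsys_margins, dtype=float)
    W = np.asarray(interconnection_bounds, dtype=float).copy()
    np.fill_diagonal(W, -sigma)
    return W

def connectively_stable(W):
    """Sufficient test: if the aggregate model is stable (W is Hurwitz,
    all eigenvalues in the open left half-plane), connective stability
    of the overall interconnected system follows."""
    return bool(np.all(np.linalg.eigvals(W).real < 0))

# Two subsystems with stability margins 3 and 4 and weak cross-coupling bounds.
W = aggregate_matrix([3.0, 4.0], [[0.0, 0.5], [0.7, 0.0]])
print(connectively_stable(W))   # True: aggregate model stable
```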

High Performance Computing Using Out-of-Core Sparse Direct Solvers

The in-core memory requirement is a bottleneck when solving large three-dimensional Navier-Stokes finite element formulations using sparse direct solvers. An out-of-core solution strategy is a viable alternative for reducing in-core memory requirements when solving large-scale problems. This study evaluates the performance of several out-of-core sequential solvers based on multifrontal or supernodal techniques in the context of finite element formulations for three-dimensional problems on a Windows platform. Three solvers, HSL_MA78, MUMPS, and PARDISO, are compared. Their performance is evaluated on a 64-bit machine with 16 GB RAM for a finite element formulation of flow through a rectangular channel. It is observed that, using the out-of-core PARDISO solver, relatively large problems can be solved. The implementation of Newton and modified Newton iterations is also discussed.
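
The Newton versus modified Newton distinction mentioned at the end matters for out-of-core solvers because each Jacobian factorization is expensive. The sketch below shows a modified Newton loop that reuses a sparse LU factorization for several steps (full Newton corresponds to refactoring at every step); SciPy's in-core splu merely stands in for the out-of-core solvers compared in the paper, and the tiny test problem is invented for illustration:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

def modified_newton(F, J, x0, refactor_every=5, tol=1e-8, max_iter=50):
    """Modified Newton iteration: the sparse Jacobian is factorized only every
    `refactor_every` steps and the factors are reused in between, which is the
    main saving when each factorization must go out of core."""
    x, lu = x0.copy(), None
    for k in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            return x, k
        if lu is None or k % refactor_every == 0:
            lu = splu(csc_matrix(J(x)))     # expensive factorization step
        x -= lu.solve(r)                    # cheap triangular solves reuse the factors
    return x, max_iter

# Tiny illustrative nonlinear system with solution near (1, 1).
F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
print(modified_newton(F, J, np.array([2.0, 0.5])))
```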

Improving Fault Resilience and Reconstruction of Overlay Multicast Tree Using Leaving Time of Participants

Network-layer multicast, i.e., IP multicast, has not been deployed at large scale even after many years of research, development, and standardization, due to both technical (e.g., router upgrades) and political (e.g., policy making and negotiation) issues. Researchers have looked for alternatives and proposed application/overlay multicast, where multicast functions are handled by end hosts rather than network-layer routers. Member hosts wishing to receive multicast data form a multicast delivery tree. The intermediate hosts in the tree also act as routers, i.e., they forward data to the hosts below them in the tree. Unlike IP multicast, where a router cannot leave the tree until all members below it have left, in overlay multicast any member can leave the tree at any time, partitioning the tree and disrupting data dissemination. All the disrupted hosts then have to rejoin the tree. This characteristic of overlay multicast makes the multicast tree unstable and causes data loss and rejoin overhead. In this paper, we propose that each node set its leaving time from the tree and send its join request to a number of nodes in the tree. A node in the tree rejects the request if its leaving time is earlier than that of the requesting node; otherwise it accepts the request. The requesting node can then join at one of the accepting nodes. This makes the tree more stable, as nodes join the tree according to their leaving times, with the earliest-leaving nodes placed at the leaves of the tree. Some intermediate nodes may not honor their leaving time and may leave earlier, thus disrupting the tree. For this case, we propose a proactive recovery mechanism so that disrupted nodes can immediately rejoin the tree at predetermined nodes. We show by simulation that the overhead of joining the multicast tree is lower and the recovery time of disrupted nodes is much shorter than in previous works.
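
The join rule described above can be sketched as a simple acceptance test on announced leaving times: a node accepts a join request only from nodes that will leave no later than itself, so short-lived members end up near the leaves. The class below is a minimal illustration of that rule only; the multi-candidate join requests and the proactive recovery mechanism are not modeled, and all names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Overlay multicast participant with an announced leaving time."""
    name: str
    leaving_time: float                           # announced time at which the node will leave
    children: list["Node"] = field(default_factory=list)

    def handle_join(self, requester: "Node") -> bool:
        """Accept the join request only if this node stays in the tree at least
        as long as the requester, pushing early leavers towards the leaves so
        their departure disrupts fewer descendants."""
        if self.leaving_time < requester.leaving_time:
            return False                          # would leave before the requester: reject
        self.children.append(requester)
        return True

root = Node("root", leaving_time=1000.0)
early = Node("early", leaving_time=50.0)
late = Node("late", leaving_time=800.0)
print(root.handle_join(late), root.handle_join(early))        # True True
print(late.handle_join(Node("later", leaving_time=900.0)))    # False: 'late' leaves first
```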

Management of Large Emergency Situations – A Best Practice Case Study Based on GIS for Evacuation Management

In most cases, natural disasters make it necessary to evacuate people. The quality of evacuation management is dramatically improved by the use of information provided by decision support systems, which become indispensable in large-scale evacuation operations. This paper presents a best practice case study. In November 2007, officers from the Emergency Situations Inspectorate "Crisana" of Bihor County, Romania, participated in a cross-border evacuation exercise in which 700 people were evacuated from the Netherlands to Belgium. One of the main objectives of the exercise was to test four different decision support systems. Afterwards, based on that experience, a software system called TEVAC (Trans-Border Evacuation) was developed in-house by the experts of this institution. This original software system was successfully tested in September 2008 during the international exercise EU-HUROMEX 2008, whose scenario involved the real evacuation of 200 persons from Hungary to Romania. Based on the lessons learned and the results, the TEVAC software has been used by all Emergency Situations Inspectorates across Romania since April 2009.

Power Quality Improvement Using PI and Fuzzy Logic Controllers Based Shunt Active Filter

In recent years, the large-scale use of power electronic equipment has led to an increase of harmonics in the power system. These harmonics result in poor power quality and have a significant adverse economic impact on utilities and customers. Current harmonics are one of the most common power quality problems and are usually addressed using a shunt active filter (SHAF). The main objective of this work is to develop PI and fuzzy logic controllers (FLC) and to analyze the performance of the shunt active filter in mitigating current harmonics under balanced and unbalanced sinusoidal source voltage conditions, for both normal and increased load. When the supply voltages are ideal (balanced), both the PI controller and the FLC converge to the same compensation characteristics. However, when the supply voltages are non-ideal (unbalanced), the FLC offers outstanding results. Simulation results validate the superiority of the FLC with a triangular membership function over the PI controller.
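
For reference, the discrete PI control law used in such schemes (for example, for regulating the DC-link voltage of the shunt active filter) has the simple form sketched below. The gains, sample time, and output limit are illustrative values, not those tuned in this work, and the fuzzy logic controller is not shown:

```python
class PIController:
    """Discrete PI controller of the kind used, e.g., for DC-link voltage
    regulation in a shunt active filter (gains and limits are illustrative)."""

    def __init__(self, kp: float, ki: float, dt: float, out_limit: float):
        self.kp, self.ki, self.dt, self.out_limit = kp, ki, dt, out_limit
        self.integral = 0.0

    def update(self, reference: float, measurement: float) -> float:
        error = reference - measurement
        self.integral += error * self.dt
        out = self.kp * error + self.ki * self.integral
        # Simple clamp on the output as a crude anti-windup measure.
        return max(-self.out_limit, min(self.out_limit, out))

pi = PIController(kp=0.5, ki=20.0, dt=1e-4, out_limit=50.0)
print(pi.update(reference=700.0, measurement=690.0))   # e.g. DC-link voltage error of 10 V
```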

Factors of Effective Business Software Systems Development and Enhancement Projects Work Effort Estimation

The majority of Business Software Systems (BSS) Development and Enhancement Projects (D&EP) fail to meet their effectiveness criteria, which leads to considerable financial losses. One of the fundamental reasons for such projects' exceptionally low success rate is improperly derived estimates of their costs and time. In the case of BSS D&EP, these attributes are determined by the work effort; meanwhile, reliable and objective effort estimation still appears to be a great challenge to software engineering. Thus, this paper presents the most important synthetic conclusions from the author's own studies concerning the main factors of effective BSS D&EP work effort estimation. Thanks to rational investment decisions made on the basis of reliable and objective criteria, it is possible to reduce losses caused not only by abandoned projects but also by large-scale overruns of the time and costs of BSS D&EP execution.

3D Network-on-Chip with on-Chip DRAM: An Empirical Analysis for Future Chip Multiprocessor

With the increasing number of on-chip components and the critical requirement for processing power, the Chip Multiprocessor (CMP) has gained wide acceptance in both academia and industry during the last decade. However, conventional bus-based on-chip communication schemes suffer from very high communication delay and low scalability in large-scale systems. The Network-on-Chip (NoC) has been proposed to overcome the bottleneck of parallel on-chip communication by applying different network topologies that separate the communication phase from the computation phase. Observing that the memory bandwidth between on-chip components and off-chip memory has become a critical problem even in NoC-based systems, in this paper we propose a novel 3D NoC with on-chip Dynamic Random Access Memory (DRAM) in which different layers are dedicated to different functionalities such as processors, cache, or memory. Results show that, with the proposed architecture, average link utilization is reduced by 10.25% for SPLASH-2 workloads. The proposed design also requires 1.12% fewer execution cycles than the traditional design on average.

Simulation of Sloshing-Shear Mixed Shallow Water Waves (II) Numerical Solutions

This is the second part of the paper. Aside from the core subroutine test reported previously, it focuses on the large-scale simulation of turbulence governed by the full STF Navier-Stokes equations. The law of the wall is found to be a plausible model of the boundary-layer dynamics in this study. Model validations include the velocity profiles of a stationary turbulent Couette flow, pure sloshing flow simulations, and the identification of water-surface inclination due to fluid accelerations. Errors resulting from the irrotational and hydrostatic assumptions are explored by studying a wind-driven water circulation without shaking. Illustrative examples show that this numerical strategy works for the simulation of sloshing-shear mixed flow in a 3-D rigid rectangular-base tank.
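
The law of the wall invoked here as the boundary-layer model is the standard two-region profile in wall units: u+ = y+ in the viscous sublayer and u+ = (1/kappa) ln(y+) + B in the log region. A minimal sketch with the conventional constants (not values tuned in this study, and with a simple fixed matching point) follows:

```python
import numpy as np

def log_law_velocity(y_plus, kappa: float = 0.41, B: float = 5.0):
    """Law-of-the-wall mean velocity in wall units: u+ = y+ in the viscous
    sublayer and u+ = (1/kappa) ln(y+) + B in the log region.
    Constants and the matching point y+ ~ 11 are the conventional values."""
    y_plus = np.asarray(y_plus, dtype=float)
    log_region = np.log(np.maximum(y_plus, 1e-12)) / kappa + B
    return np.where(y_plus < 11.0, y_plus, log_region)

print(log_law_velocity([1.0, 5.0, 30.0, 100.0]))
```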

An FPGA Implementation of Intelligent Visual Based Fall Detection

Falling has been one of the major concerns and threats to the independence of the elderly in their daily lives. With the significant worldwide growth of the aging population, it is essential to have a fall detection solution that operates with high accuracy in real time and supports large-scale deployment using multiple cameras. The Field Programmable Gate Array (FPGA) is a highly promising hardware accelerator for many emerging embedded vision-based systems. The main objective of this paper is therefore to present an FPGA-based solution for visual fall detection that meets stringent real-time requirements with high accuracy. A hardware architecture for visual fall detection that exploits pixel locality to reduce memory accesses is proposed. By exploiting the parallel and pipelined architecture of the FPGA, our hardware implementation of visual fall detection achieves a performance of 60 fps for a series of video analytic functions at VGA resolution (640x480). The results of this work show that FPGAs have great potential for enabling large-scale vision systems in the future healthcare industry due to their flexibility and scalability.

Screen of MicroRNA Targets in Zebrafish Using Heterogeneous Data Sources: A Case Study for Dre-miR-10 and Dre-miR-196

It has been established that microRNAs (miRNAs) play an important role in gene expression through post-transcriptional regulation of messenger RNAs (mRNAs). However, the precise relationships between miRNAs and their target genes, in terms of numbers, types, and biological relevance, remain largely unclear. Dissecting miRNA-target relationships will provide more insight into miRNA target identification and validation and thereby promote the understanding of miRNA function. In miRBase, miRanda is the key algorithm used for target prediction in zebrafish. This algorithm is high-throughput but produces many false positives (noise). Since validating a large number of targets through laboratory experiments is very time-consuming, computational methods for miRNA target validation need to be developed. In this paper, we present an integrative method to investigate several aspects of the relationships between miRNAs and their targets, with the final purpose of extracting high-confidence targets from the pool of miRanda-predicted targets. This is achieved using techniques ranging from statistical tests to clustering and association rules. Our research focuses on zebrafish. It was found that validated targets are not necessarily associated with the highest sequence matching. In addition, for some miRNA families, the frequency of their predicted targets is significantly higher in the genomic region near their own physical location. Finally, in a case study of dre-miR-10 and dre-miR-196, it was found that the predicted target genes hoxd13a, hoxd11a, hoxd10a, and hoxc4a of dre-miR-10, as well as hoxa9a, hoxc8a, and hoxa13a of dre-miR-196, have characteristics similar to those of validated target genes and therefore represent high-confidence target candidates.