E-Government in Transition Economies

This paper deals with e-government issues at several levels. We first examine the concept of e-government itself in order to place it within a sound framework. We then examine e-government issues at three levels: first at the global level, second at the level of transition economies, and finally with a closer look at developments in Croatia. The analysis covers the actual progress made in selected transition economies relative to Euro area averages, along with the potential of e-government in the demanding period ahead.

Haar Wavelet Method for Solving Fitz Hugh-Nagumo Equation

In this paper, we develop an accurate and efficient Haar wavelet method for the well-known FitzHugh-Nagumo equation. The proposed scheme can be applied to a wide class of nonlinear reaction-diffusion equations. The effectiveness of this easily manageable method is confirmed. Moreover, the Haar wavelet approach is found to be accurate, simple, fast, flexible, and convenient, with a small computational cost, making it computationally attractive.
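
As a brief illustration of the basis such methods build on (not the paper's full collocation scheme for the FitzHugh-Nagumo equation), the sketch below generates the Haar wavelet family on a uniform collocation grid using the standard indexing of Haar wavelet methods; the resolution level J and grid size are illustrative choices.

```python
import numpy as np

def haar_matrix(J):
    """Haar wavelet family h_i evaluated at the 2M collocation points
    x_l = (l - 0.5) / (2M), l = 1..2M, with M = 2**J (the standard setup
    in Haar wavelet collocation methods)."""
    M = 2 ** J
    x = (np.arange(1, 2 * M + 1) - 0.5) / (2 * M)
    H = np.zeros((2 * M, 2 * M))
    H[0, :] = 1.0                      # scaling function h_1(x) = 1
    i = 1
    for j in range(J + 1):             # dilation levels m = 2**j
        m = 2 ** j
        for k in range(m):             # translations k = 0..m-1
            xi1, xi2, xi3 = k / m, (k + 0.5) / m, (k + 1) / m
            H[i, (x >= xi1) & (x < xi2)] = 1.0
            H[i, (x >= xi2) & (x < xi3)] = -1.0
            i += 1
    return x, H

x, H = haar_matrix(J=3)
print(H.shape)   # (16, 16): 2M Haar functions sampled at 2M collocation points
```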

Gas Detonation Forming by a Mixture of H2+O2 Detonation

Explosive forming is one of the unconventional techniques in which water is most commonly used as the pressure transmission medium. One of the newest methods in explosive forming is gas detonation forming, which uses a normal shock wave generated by a gas detonation to form sheet metals. For this purpose, a detonation is developed from the reaction of an H2+O2 mixture in a long cylindrical detonation tube. The detonation wave travels through the tube, acts as a blast load on the steel blank, and forms it. Experimental results are compared with a finite element model, and the comparison of the experimental and numerical results covers strain, thickness variation and deformed geometry. Numerical and experimental results showed approximately 75-90% agreement in the formability of the desired shape. The optimum gas mixture was obtained with 68% H2 and 32% O2.

Web Application for Profiling Scientific Institutions through Citation Mining

Recently, data mining has been applied to scientific bibliographic databases to analyze the pathways of knowledge or the core scientific interests of a Nobel laureate or a country. This specific case of data mining has been named citation mining, and it is the integration of citation bibliometrics and text mining. In this paper we present an improved web implementation of statistical physics algorithms that perform the text mining component of citation mining. In particular, we use an entropy-like distance between compressed texts as an indicator of their similarity. Finally, we have included the recently proposed h-index to characterize scientific production. We have used this web implementation to identify the users, applications and impact of the Mexican scientific institutions located in the State of Morelos.
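
As an illustration of the compression-based similarity such a text mining component relies on (the exact algorithm and compressor used in the paper are not specified here, so zlib and the normalized compression distance are assumptions), a minimal sketch:

```python
import zlib

def c(s: bytes) -> int:
    """Compressed length of a byte string."""
    return len(zlib.compress(s, 9))

def ncd(a: str, b: str) -> float:
    """Normalized compression distance: small values indicate that the two
    texts share much compressible (repeated) structure."""
    x, y = a.encode(), b.encode()
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

doc1 = "statistical physics algorithms for text mining of citations"
doc2 = "text mining of citation records with statistical physics methods"
doc3 = "gas detonation forming of sheet metal in a cylindrical tube"
print(ncd(doc1, doc2), ncd(doc1, doc3))  # the related pair should score lower
```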

A Hybrid Recommender System based on Collaborative Filtering and Cloud Model

User-based collaborative filtering (CF), one of the most prevalent and efficient recommendation techniques, provides personalized recommendations to users based on the opinions of other users. Although the CF technique has been successfully applied in various applications, it suffers from serious sparsity problems. The cloud-model approach addresses the sparsity problem by constructing the user's global preference, represented by a cloud eigenvector. The user-based CF approach works well with dense datasets, while the cloud-model CF approach performs better when the dataset is sparse. In this paper, we present a hybrid approach that integrates the predictions from both the user-based CF and the cloud-model CF approaches. The experimental results show that the proposed hybrid approach can ameliorate the sparsity problem and provide improved prediction quality.
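
A minimal sketch of the idea on a toy ratings matrix, assuming a Pearson-based user CF predictor, a backward-cloud eigenvector (Ex, En, He) per user, and a simple linear blend; the paper's exact cloud-model similarity and weighting scheme are not reproduced here.

```python
import numpy as np

# Toy user-item matrix (rows: users, cols: items); 0 means "not rated".
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def pearson(u, v):
    """Pearson correlation over co-rated items (user-based CF similarity)."""
    mask = (u > 0) & (v > 0)
    if mask.sum() < 2:
        return 0.0
    a, b = u[mask], v[mask]
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

def cloud_eigenvector(u):
    """Backward cloud generator (Ex, En, He) summarizing a user's ratings."""
    r = u[u > 0]
    ex = r.mean()
    en = np.sqrt(np.pi / 2) * np.abs(r - ex).mean()
    he = np.sqrt(max(r.var() - en ** 2, 0.0))
    return np.array([ex, en, he])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def predict(R, sim_fn, user, item):
    """Similarity-weighted average of other users' ratings of the item."""
    num = den = 0.0
    for v in range(R.shape[0]):
        if v == user or R[v, item] == 0:
            continue
        s = sim_fn(R[user], R[v])
        if s <= 0:                       # keep only positively similar neighbours
            continue
        num += s * R[v, item]
        den += s
    return num / den if den > 0 else R[R[:, item] > 0, item].mean()

def hybrid_predict(R, user, item, lam=0.5):
    cf = predict(R, pearson, user, item)
    cloud = predict(R, lambda u, v: cosine(cloud_eigenvector(u),
                                           cloud_eigenvector(v)), user, item)
    return lam * cf + (1 - lam) * cloud   # simple linear blend of both predictors

print(hybrid_predict(R, user=0, item=2))
```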

Characteristics Analysis of Voltage Sag and Voltage Swell in Multi-Grounded Four-Wire Power Distribution Systems

In North America, most power distribution systems employ a four-wire multi-grounded neutral (MGN) design. The inherent characteristics of multi-grounded three-phase four-wire distribution systems under unbalanced conditions make the mechanism of voltage swell and voltage sag in MGN feeders difficult to understand; this paper explains these characteristics. The simulation tool used in this paper is MATLAB under Windows. An equivalent model of a full-scale multi-grounded distribution system implemented in MATLAB is introduced. The results are expected to help utility engineers understand the impact of MGN on distribution system operations.

A Serializability Condition for Multi-step Transactions Accessing Ordered Data

In mobile environments, unspecified numbers of transactions arrive in continuous streams. To prove the correctness of their concurrent execution, a method of modelling an infinite number of transactions is needed. Standard database techniques model fixed finite schedules of transactions. Lately, techniques based on temporal logic have been proposed as suitable for modelling infinite schedules. The drawback of these techniques is that proving the basic serializability correctness condition is impractical, as encoding (the absence of) conflict cyclicity within large sets of transactions results in prohibitively large temporal logic formulae. In this paper, we show that, under certain common assumptions on the graph structure of the data items accessed by the transactions, conflict cyclicity need only be checked within all possible pairs of transactions. This results in formulae of considerably reduced size in any temporal-logic-based approach to proving serializability, and scales to arbitrary numbers of transactions.
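
To make the pairwise check concrete, here is a small finite sketch (the paper works with temporal logic over infinite schedules, which this sketch does not model): a schedule is an interleaved sequence of read/write steps, and serializability is rejected as soon as any pair of transactions has conflict edges in both directions.

```python
from itertools import combinations

# A schedule is an interleaved sequence of steps (transaction_id, op, item),
# with op in {"r", "w"}.  Two steps conflict when they belong to different
# transactions, touch the same item, and at least one of them is a write.
schedule = [
    (1, "r", "x"), (2, "w", "x"),
    (2, "r", "y"), (1, "w", "y"),   # creates edges T1->T2 and T2->T1
]

def conflict_edges(schedule):
    edges = set()
    for i, (ti, oi, di) in enumerate(schedule):
        for (tj, oj, dj) in schedule[i + 1:]:
            if ti != tj and di == dj and "w" in (oi, oj):
                edges.add((ti, tj))    # ti's step precedes tj's conflicting step
    return edges

def pairwise_serializable(schedule):
    """The serializability check reduced to pairs: reject as soon as some
    pair of transactions has conflict edges in both directions (a 2-cycle)."""
    edges = conflict_edges(schedule)
    txns = {t for (t, _, _) in schedule}
    for a, b in combinations(txns, 2):
        if (a, b) in edges and (b, a) in edges:
            print(f"conflict cycle between T{a} and T{b}")
            return False
    return True

print(pairwise_serializable(schedule))   # False for the schedule above
```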

Speed-Sensorless Vector Control of Parallel Connected Induction Motor Drive Fed by a Single Inverter Using Natural Observer

This paper describes a speed-sensorless vector control method for a parallel-connected induction motor drive fed by a single inverter. The speed and rotor fluxes of the induction motor are estimated by a natural observer with load torque adaptation and by an adaptive rotor flux observer. These performance parameters are estimated from the measured terminal voltages and currents. A fourth-order induction motor model is used, and the speed is treated as a parameter. The performance of the natural observer is similar to that of the conventional observer. The speed of the induction motor is estimated by MATLAB simulation under different speed and load conditions. The estimated values, along with the other measured states, are used for closed-loop control. The simulation results show that the natural observer is also effective for the parallel-connected induction motor drive.

Shape Restoration of the Left Ventricle

This paper describes an automatic algorithm to restore the shape of three-dimensional (3D) left ventricle (LV) models created from magnetic resonance imaging (MRI) data using a geometry-driven optimization approach. Our basic premise is to restore the LV shape such that the LV epicardial surface is smooth after the restoration. A geometrical measure known as the minimum principal curvature (κ2) is used to assess the smoothness of the LV. This measure is used to construct the objective function of a two-step optimization process. The objective of the optimization is to achieve a smooth epicardial shape by iterative in-plane translation of the MRI slices. Quantitatively, this yields a minimum sum of the magnitudes of κ2 over the regions where κ2 is negative. A limited-memory quasi-Newton algorithm, L-BFGS-B, is used to solve the optimization problem. We tested our algorithm on an in vitro theoretical LV model and 10 in vivo patient-specific models which contain significant motion artifacts. The results show that our method is able to automatically restore the shape of LV models back to smoothness without altering the general shape of the model. The magnitudes of the in-plane translations are also consistent with existing registration techniques and experimental findings.
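
A minimal sketch of the optimization step, assuming SciPy's L-BFGS-B and a toy stack of slice contours; the paper's κ2-based objective is replaced here by a crude smoothness proxy (squared distance between consecutive slice centroids), so this only illustrates the per-slice in-plane translation search, not the actual curvature measure.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stack of short-axis epicardial contours: one (K, 2) array of in-plane
# points per MRI slice.  Slice misalignment is simulated by random shifts.
rng = np.random.default_rng(0)
K, n_slices = 60, 8
theta = np.linspace(0, 2 * np.pi, K, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
true_shift = rng.normal(scale=0.4, size=(n_slices, 2))
slices = [circle * (1.0 + 0.02 * s) + true_shift[s] for s in range(n_slices)]

def objective(t):
    """Smoothness proxy: squared distance between the centroids of consecutive
    slices after applying the per-slice translations t (the paper minimizes a
    minimum-principal-curvature measure instead)."""
    t = t.reshape(n_slices, 2)
    cent = np.array([(s + dt).mean(axis=0) for s, dt in zip(slices, t)])
    return np.sum(np.diff(cent, axis=0) ** 2)

res = minimize(objective, x0=np.zeros(2 * n_slices), method="L-BFGS-B")
print("residual misalignment:", res.fun)
```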

Revisiting the Concept of Risk Analysis within the Context of Geospatial Database Design: A Collaborative Framework

The aim of this research is to design a collaborative framework that integrates risk analysis activities into the geospatial database design (GDD) process. Risk analysis is rarely undertaken iteratively as part of present GDD methods, in conformance with requirements engineering (RE) guidelines and risk standards. Accordingly, when risk analysis is performed during GDD, some foreseeable risks may be overlooked and never reach the output specifications, especially when user intentions are not systematically collected. This may lead to ill-defined requirements and ultimately to higher risks of geospatial data misuse. The adopted approach consists of 1) reviewing the risk analysis process within the scope of RE and GDD, 2) analyzing the challenges of risk analysis within the context of GDD, and 3) presenting the components of a risk-based collaborative framework that improves the collection of the intended/forbidden usages of the data and helps geo-IT experts to discover implicit requirements and risks.

Satellite Data Classification Accuracy Assessment Based on Reference Dataset

In order to develop forest management strategies for tropical forests in Malaysia, surveying forest resources and monitoring the forest areas affected by logging activities are essential. Tremendous effort has been devoted to land cover classification related to forest resource management in this country, as it is a priority in all aspects of forest mapping using remote sensing and related technologies such as GIS. In fact, classification is a compulsory step in any remote sensing research. Therefore, the main objective of this paper is to assess the classification accuracy of a classified forest map derived from Landsat TM data using different numbers of reference data points (200 and 388). The comparison was made through an observation approach (200 reference points) and a combined interpretation and observation approach (388 reference points). Five land cover classes, namely primary forest, logged-over forest, water bodies, bare land and agricultural crop/mixed horticulture, can be identified by their differences in spectral response. Results showed that the overall accuracy obtained with 200 reference points was 83.5% (kappa value 0.7502459; kappa variance 0.002871), which is considered acceptable or good for optical data. When the number of reference points in the confusion matrix was increased from 200 to 388, the overall accuracy improved from 83.5% to 89.17%, and the kappa statistic increased from 0.7502459 to 0.8026135. These accuracies suggest that the strategy for selecting training areas, the interpretation approach and the number of reference data points used are important for producing better classification results.
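
For reference, the accuracy measures reported above can be computed from a confusion matrix as in the sketch below; the 5-class matrix shown is an illustrative placeholder, not the matrix from the paper.

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix whose rows
    are reference (ground-truth) classes and columns are mapped classes."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                    # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2    # chance agreement
    return po, (po - pe) / (1 - pe)

# Illustrative 5-class confusion matrix (primary forest, logged-over forest,
# water, bare land, agriculture); not the matrix reported in the paper.
cm = [[52,  4,  0,  1,  1],
      [ 5, 45,  0,  2,  3],
      [ 0,  0, 18,  1,  0],
      [ 1,  2,  0, 20,  2],
      [ 2,  3,  0,  1, 37]]
acc, kappa = accuracy_and_kappa(cm)
print(f"overall accuracy = {acc:.3f}, kappa = {kappa:.3f}")
```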

Utilization of Advanced Data Storage Technology to Conduct Construction Industry on Clear Environment

Construction projects generally take place in uncontrolled and dynamic environments, and construction waste is a serious environmental problem in many large cities. The total amount of waste and the carbon dioxide emissions from transportation vehicles remain out of control due to the increasing number of construction projects, massive urban development projects and the lack of effective tools for minimizing adverse environmental impacts in construction. This research concerns the integrated application of automated tracking and advanced data storage technologies in the area of environmental management to monitor and control adverse environmental impacts such as construction waste and carbon dioxide emissions. Radio Frequency Identification (RFID) integrated with the Global Positioning System (GPS) provides an opportunity to uniquely identify materials, components and equipment and to locate and track them using minimal or no worker input. The transmission of data to the central database is carried out with the help of the Global System for Mobile Communications (GSM).

A Thought on Exotic Statistical Distributions

Statistical distributions are used to model and explain the nature of various types of data sets. Although these distributions are mostly uni-modal, it is quite common to see multiple modes in the observed distribution of the underlying variables, which makes precise modeling unrealistic. The lack of smoothness in observed data is not necessarily due to randomness; it could also be due to non-randomness, resulting in zigzag curves, oscillations, humps, etc. The present paper argues that trigonometric functions, which have so far not been used in the probability functions of distributions, have the potential to capture such behaviour if incorporated into the distribution appropriately. A simple distribution involving trigonometric functions, named the Sinoform distribution, is illustrated in the paper with a data set. The paper demonstrates the importance of trigonometric functions, which have the capacity to make statistical distributions exotic. It is possible to have multiple modes, oscillations and zigzag curves in the density, which could be suitable for explaining the underlying nature of selected data sets.
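
The functional form of the Sinoform distribution is not reproduced here; as a purely hypothetical stand-in, the sketch below multiplies a Gaussian kernel by a sine term and normalizes numerically, which is enough to show how a trigonometric factor produces several modes and oscillations in a density.

```python
import numpy as np

def sinoform_like_pdf(x, a=0.8, k=3.0, mu=0.0, sigma=1.0):
    """Hypothetical trigonometrically modulated density: a Gaussian kernel
    multiplied by (1 + a*sin(k*x)), with |a| < 1 so the factor stays positive."""
    base = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return base * (1.0 + a * np.sin(k * x))

x = np.linspace(-4, 4, 2001)
f = sinoform_like_pdf(x)
f /= np.trapz(f, x)                    # normalize numerically so it integrates to 1
# Count local maxima: the sine factor produces several modes, not one.
modes = np.sum((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:]))
print("number of modes:", modes)
```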

A Microcontroller Implementation of Model Predictive Control

Model Predictive Control (MPC) is increasingly being proposed for real-time applications and embedded systems. However, compared with the PID controller, implementation of MPC in miniaturized devices such as Field Programmable Gate Arrays (FPGAs) and microcontrollers has historically been limited, owing to its implementation complexity and computation time requirements. At the same time, such embedded technologies have become an enabler for future manufacturing enterprises as well as a transformer of organizations and markets. Recently, advances in microelectronics and software have made it possible to implement this technique in embedded systems. In this work, we take advantage of these recent advances to deploy one of the most studied and applied control techniques in industrial engineering. Specifically, we propose an efficient framework for the implementation of Generalized Predictive Control (GPC) on an STM32 microcontroller. The STM32 Keil starter kit, based on a JTAG interface and the STM32 board, was used to implement the proposed GPC firmware. Besides the GPC, an anti-windup PID algorithm was also implemented using the Keil development tools designed for ARM processor-based microcontroller devices, working in the C/C++ language. A performance comparison study was carried out between the two firmwares, showing good execution speed and a low computational burden. These results encourage the development of simple predictive algorithms that can be programmed on industry-standard hardware. The main features of the proposed framework are illustrated through two examples and compared with the anti-windup PID controller.
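
The GPC derivation itself is not reproduced here; as a small, self-contained companion, the sketch below shows the kind of anti-windup PID loop used as the comparison baseline, with integrator clamping under actuator saturation. The first-order plant and the gains are illustrative, not the paper's tuning, and the sketch is in Python for consistency with the other examples rather than in the firmware's C.

```python
# Discrete PID with anti-windup (integrator clamping): the integrator is frozen
# while the actuator output is saturated, which prevents integral wind-up.
def simulate(kp=2.0, ki=1.5, kd=0.05, dt=0.01, u_min=0.0, u_max=1.0,
             setpoint=1.0, steps=500):
    a, b = 0.98, 0.05          # toy first-order plant: y[k+1] = a*y[k] + b*u[k]
    y, integ, prev_err = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = setpoint - y
        deriv = (err - prev_err) / dt
        u_unsat = kp * err + ki * integ + kd * deriv
        u = min(max(u_unsat, u_min), u_max)       # actuator saturation
        if u == u_unsat:                          # anti-windup: only integrate
            integ += err * dt                     # while the output is unsaturated
        prev_err = err
        y = a * y + b * u
    return y

print(simulate())   # should settle close to the setpoint of 1.0
```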

A Block World Problem Based Sudoku Solver

There are many approaches proposed for solving Sudoku puzzles. One of them is to model the puzzles as block world problems. Three models of Sudoku solvers based on this approach have been proposed. Each model expresses the Sudoku solver as a parameterized multi-agent system. In this work, we propose a new model which is an improvement over the existing models. This paper presents the development of a Sudoku solver that implements all the proposed models. Some experiments have been conducted to determine the performance of each model.
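
For readers unfamiliar with the problem itself, the sketch below is a plain backtracking Sudoku solver of the kind often used as a baseline in such comparisons; it is not one of the block-world, multi-agent models discussed in the paper. Calling solve(puzzle) on a 9x9 list of lists with 0 for empty cells fills the grid in place.

```python
def valid(grid, r, c, v):
    """True if value v can be placed at (r, c) without violating row,
    column, or 3x3 block constraints."""
    if v in grid[r] or v in (grid[i][c] for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(grid):
    """Plain backtracking solver: find the first empty cell, try each value,
    recurse, and undo the assignment on failure."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if valid(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0
                return False
    return True
```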

Health Risk Assessment in Lead Battery Smelter Factory: A Bayesian Belief Network Method

This paper proposes the use of Bayesian belief networks (BBN) as a higher level of health risk assessment for a dumping site of a lead battery smelter factory. On the basis of epidemiological studies, actual hospital attendance records and expert experience, the BBN is capable of capturing the probabilistic relationships between the hazardous substances and their adverse health effects, and accordingly inferring the morbidity of those adverse health effects. The provision of morbidity rates for the related diseases is more informative and can alleviate the drawbacks of conventional methods.
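
A deliberately tiny, hypothetical network (lead exposure influencing anemia) shows how morbidity is inferred once conditional probability tables are in place; the structure, variables and all probabilities below are made up for illustration, and the pgmpy library is an assumed choice, not the paper's tool.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical two-node BBN: exposure -> disease, with illustrative CPDs.
model = BayesianNetwork([("LeadExposure", "Anemia")])
cpd_lead = TabularCPD("LeadExposure", 2, [[0.7], [0.3]])      # P(low), P(high)
cpd_anemia = TabularCPD("Anemia", 2,
                        [[0.95, 0.60],     # P(no anemia | low, high exposure)
                         [0.05, 0.40]],    # P(anemia   | low, high exposure)
                        evidence=["LeadExposure"], evidence_card=[2])
model.add_cpds(cpd_lead, cpd_anemia)
assert model.check_model()

# Inferred morbidity of anemia given high lead exposure.
morbidity = VariableElimination(model).query(["Anemia"],
                                             evidence={"LeadExposure": 1})
print(morbidity)
```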

Supercompression for Full-HD and 4k-3D (8k) Digital TV Systems

In this work, we develop the concept of supercompression, i.e., compression beyond the compression standard used. In this context, the two compression rates are multiplied. Supercompression is based on super-resolution; that is to say, supercompression is a data compression technique that superposes spatial image compression on top of bit-per-pixel compression to achieve very high compression ratios. If the compression ratio is very high, then we use a convolutive mask inside the decoder that restores the edges, eliminating the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards. Specifically, the mentioned mask is coded inside the texture memory of a GPGPU.
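
The decoder-side edge-restoring step can be illustrated with a generic 3x3 Laplacian-based sharpening mask applied on the CPU with SciPy; the actual mask coefficients and the GPGPU texture-memory implementation used in the paper are not reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

# Stand-in for a decoded frame whose edges were smeared by heavy compression:
# a simple step-edge image, blurred with a Gaussian.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
blurred = gaussian_filter(img, sigma=1.5)

# A 3x3 convolutive sharpening (Laplacian-based) mask applied after decoding;
# the exact mask used in the paper is not specified here.
mask = np.array([[ 0, -1,  0],
                 [-1,  5, -1],
                 [ 0, -1,  0]], dtype=float)
restored = np.clip(convolve(blurred, mask, mode="nearest"), 0.0, 1.0)

edge = lambda a: np.abs(np.diff(a, axis=1)).max()
print(edge(blurred), edge(restored))   # the mask steepens the smeared edge
```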

Heat Exchanger Design

This paper is intended to assist anyone with general technical experience but perhaps limited specific knowledge of heat transfer equipment. A characteristic of heat exchanger design is the procedure of specifying a design, its heat transfer area and pressure drops, and checking whether the assumed design satisfies all requirements. The purpose of this paper is to show how to design an oil cooler, in particular a shell-and-tube heat exchanger, which is the most common type of liquid-to-liquid heat exchanger. General design considerations and the design procedure are also illustrated in this paper, and a flow diagram is provided as an aid to the design procedure. MATLAB and AutoCAD software are used in the design calculations. Fundamental heat transfer concepts and the complex relationships involved in such an exchanger are also presented. The primary aim of this design is to obtain a high heat transfer rate without exceeding the allowable pressure drop. The resulting computer program is highly useful for designing shell-and-tube heat exchangers and for modifying existing designs.
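
The core sizing step (heat duty, log-mean temperature difference, required area) follows standard relations and is sketched below; the oil and water temperatures, flow rate, overall heat-transfer coefficient and correction factor are illustrative values, not the paper's design data.

```python
import math

# Illustrative shell-and-tube oil cooler sizing; all numbers are example values.
m_hot, cp_oil = 2.0, 2100.0          # oil flow [kg/s], specific heat [J/(kg K)]
T_hot_in, T_hot_out = 90.0, 60.0     # oil inlet/outlet temperature [C]
T_cold_in, T_cold_out = 30.0, 45.0   # cooling-water inlet/outlet temperature [C]
U, F = 350.0, 0.95                   # overall coefficient [W/(m2 K)], LMTD correction

Q = m_hot * cp_oil * (T_hot_in - T_hot_out)        # heat duty [W]

dT1 = T_hot_in - T_cold_out                        # counter-flow terminal differences
dT2 = T_hot_out - T_cold_in
lmtd = (dT1 - dT2) / math.log(dT1 / dT2)           # log-mean temperature difference

A = Q / (U * F * lmtd)                             # required heat-transfer area [m2]
print(f"duty = {Q/1000:.1f} kW, LMTD = {lmtd:.1f} K, area = {A:.1f} m2")
```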

A Proposal for Federation Technology for Authenticated Information between Terminals

Recently, various services such as television and the Internet have come to be received through a variety of terminals. We could gain greater convenience by receiving these services through a cellular phone terminal while out and then continuing to receive the same services through a large-screen digital television after coming home. However, it is currently necessary to go through the same authentication processing again when switching to the TV at home. In this study, we have developed an authentication method that enables users to switch terminals in environments in which the user receives service from a server through a terminal. Specifically, the method simplifies the server-side authentication when switching from one terminal to another by using previously authenticated information.

High Performance VLSI Architecture of 2D Discrete Wavelet Transform with Scalable Lattice Structure

In this paper, we propose a fully-utilized, block-based 2D DWT (discrete wavelet transform) architecture, which consists of four 1D DWT filters with a two-channel QMF lattice structure. The proposed architecture requires about 2MN-3N registers to save the intermediate results for higher-level decomposition, where M and N stand for the filter length and the row width of the image, respectively. Furthermore, the proposed 2D DWT processes the horizontal and vertical directions simultaneously without an idle period, so that it computes the DWT of an N×N image in a period of N^2(1 - 2^(-2J))/3. Compared to existing approaches, the proposed architecture shows 100% hardware utilization and high throughput rates. To mitigate the long critical path delay due to the cascaded lattices, we can apply a four-stage pipeline technique while retaining 100% hardware utilization. The proposed architecture can be applied in real-time video signal processing.
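
The lattice-structure hardware itself is not reproduced here; as a minimal software reference of what the architecture computes in parallel, the sketch below performs one separable 2D DWT level (rows then columns), with Haar filters chosen purely for brevity in place of the QMF lattice filters.

```python
import numpy as np

def haar_dwt_1d(x):
    """One level of the 1D Haar DWT: (approximation, detail) coefficients."""
    even, odd = x[..., 0::2], x[..., 1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def dwt2_level(img):
    """One separable 2D DWT level (filter along rows, then along columns),
    producing the usual LL, LH, HL, HH sub-bands; a software reference for the
    computation the proposed architecture performs in both directions at once."""
    lo, hi = haar_dwt_1d(img)                    # horizontal filtering
    ll, lh = haar_dwt_1d(lo.T)                   # vertical filtering of the low band
    hl, hh = haar_dwt_1d(hi.T)                   # vertical filtering of the high band
    return ll.T, lh.T, hl.T, hh.T

img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = dwt2_level(img)
print(ll.shape)   # (4, 4): each sub-band is a quarter of the image
```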