Iris Localization using Circle and Fuzzy Circle Detection Method

Iris localization is a crucial step in biometric identification systems. The identification process is usually implemented in three stages: iris localization, feature extraction, and finally pattern matching. The accuracy of iris localization, as the first step, affects all subsequent stages, which underlines its importance in an iris-based biometric system. In this paper, we take the Daugman iris localization method as the standard, propose a new method, and then analyze and compare the results of both on a standard set of iris images. The proposed method is based on detecting the circular edge of the iris and is refined using fuzzy circles and surface energy differences. The method is easy to implement and, compared with other methods, offers relatively high accuracy and speed. Test results show that the accuracy of the proposed method is comparable to that of the Daugman method, while its computation is about ten times faster.
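
As a minimal illustrative sketch of the circular-edge idea (assuming a grayscale NumPy image; this approximates a Daugman-style search and omits the fuzzy-circle and surface-energy refinements described above), candidate radii can be scored by the jump in mean intensity along concentric circles:

    # Hypothetical sketch: sample intensities along circles of increasing radius
    # around a candidate center and pick the radius with the sharpest jump in
    # mean intensity (a discrete analogue of an integro-differential search).
    import numpy as np

    def circular_edge_radius(img, cx, cy, radii, n_samples=64):
        theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
        means = []
        for r in radii:
            xs = np.clip((cx + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
            ys = np.clip((cy + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
            means.append(img[ys, xs].mean())        # mean intensity on this circle
        jumps = np.abs(np.diff(np.asarray(means)))  # radial change of the means
        best = int(np.argmax(jumps))
        return radii[best], jumps[best]             # candidate iris radius, edge score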

A Fragile Watermarking Scheme for Color Image Authentication

In this paper, a fragile watermarking scheme is proposed for the authentication of specified objects in color images. The color image is first transformed from the RGB to the YST color space, which is suitable for watermarking color media. The T channel corresponds to the chrominance component of a color image and YS ⊥ T; it is therefore selected for embedding the watermark. The T channel is first divided into 2×2 non-overlapping blocks and the two LSBs are set to zero. The object to be authenticated is also divided into 2×2 non-overlapping blocks, and each block's intensity mean is computed and encoded in eight bits. The generated watermark is then embedded into the LSBs of randomly selected 2×2 blocks of the T channel using a 2D torus automorphism. The choice of block size is paramount for exact localization and recovery of the work. The proposed scheme is blind, efficient, and secure, with the ability to detect and locate even minor tampering applied to the image and to fully recover the original work. The quality of the watermarked media is quite high, both subjectively and objectively. The technique is suitable for the class of images stored in formats such as GIF, TIFF, or bitmap.
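
A minimal sketch of the block-shuffling and block-coding steps may help; the generalized torus (Arnold-type) map below has unit determinant and is only an assumed instance, since the paper's exact matrix and key are not given here:

    # Hypothetical sketch: k and the iteration count act as the secret key.
    def torus_automorphism(x, y, n, k=1, iterations=1):
        for _ in range(iterations):
            x, y = (x + y) % n, (k * x + (k + 1) * y) % n  # det = 1, area preserving
        return x, y

    def encode_block_mean(block):
        # eight-bit encoding of a 2x2 block's intensity mean (block: NumPy array)
        return int(round(float(block.mean()))) & 0xFF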

Fast Segmentation for the Piecewise Smooth Mumford-Shah Functional

This paper is concerned with an improved algorithm based on the piecewise-smooth Mumford and Shah (MS) functional for efficient and reliable segmentation. To speed up convergence, an additional force is introduced at each time step to further drive the evolution of the curves, instead of having them driven only by the extensions of the complementary functions u+ and u-. Furthermore, in our scheme, the piecewise-constant MS functional is integrated to generate this extra force, based on a temporary image that is dynamically created by computing the union of u+ and u- during segmentation. As a result, some drawbacks of the original algorithm, such as spurious small objects generated by noise and the local-minimum problem, are eliminated or mitigated. The resulting algorithm has been implemented in Matlab and Visual C++, and its efficiency is demonstrated on several cases.
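
To illustrate the piecewise-constant MS term that supplies the extra force, here is a hypothetical Chan-Vese-style sketch (the level-set convention and the weights lam1, lam2 are assumptions, not the paper's exact scheme):

    import numpy as np

    def piecewise_constant_force(img, phi, lam1=1.0, lam2=1.0):
        inside = phi > 0                 # region currently enclosed by the curve
        c1 = img[inside].mean() if inside.any() else 0.0
        c2 = img[~inside].mean() if (~inside).any() else 0.0
        # force pushing each pixel toward the closer of the two region means
        return -lam1 * (img - c1) ** 2 + lam2 * (img - c2) ** 2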

Texture Feature-Based Language Identification Using Wavelet-Domain BDIP and BVLC Features and FFT Feature

In this paper, we propose a texture feature-based language identification method using wavelet-domain BDIP (block difference of inverse probabilities) and BVLC (block variance of local correlation coefficients) features and an FFT (fast Fourier transform) feature. In the proposed method, wavelet subbands are first obtained from a test image by wavelet transform and denoised by Donoho's soft-thresholding. The BDIP and BVLC operators are next applied to the wavelet subbands. FFT blocks are also obtained by 2D (two-dimensional) FFT from the blocks into which the test image is partitioned. Some significant FFT coefficients in each block are selected and the magnitude operator is applied to them. Moments for each subband of BDIP and BVLC and for each magnitude of the significant FFT coefficients are then computed and fused into a feature vector. In classification, a stabilized Bayesian classifier, which adopts variance thresholding, searches for the training feature vector most similar to the test feature vector. Experimental results show that the proposed method with the three operations yields excellent language identification even with a rather low feature dimension.
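
As a sketch of one of the operators, the standard BDIP definition for an M×M block (M² minus the sum of intensities normalized by the block maximum) can be computed on non-overlapping blocks as follows; the 2×2 block size and the epsilon guard are assumptions:

    import numpy as np

    def bdip(img, block=2, eps=1e-8):
        h = img.shape[0] // block * block
        w = img.shape[1] // block * block
        b = img[:h, :w].reshape(h // block, block, w // block, block)
        b = b.transpose(0, 2, 1, 3).reshape(-1, block * block)
        # BDIP = M^2 - sum(I) / max(I) per block
        return block * block - b.sum(axis=1) / (b.max(axis=1) + eps)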

MPPT Operation for PV Grid-connected System using RBFNN and Fuzzy Classification

This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three Radial Basis Function Neural Networks (RBFNN). The network inputs (irradiance and temperature) are classified before being fed into the appropriate RBFNN for either training or estimation, while the output is the reference voltage. The main advantage of the proposed methodology over a conventional single neural network-based approach is its superior generalization with respect to the nonlinear and dynamic behavior of a PV generator. In effect, the neuro-fuzzy network is a neural network-based multi-model machine that defines a set of local models emulating the complex and nonlinear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations show that the proposed MPPT method achieves higher efficiency than a conventional single neural network.
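
A hypothetical sketch of the classify-then-estimate routing: a fuzzy membership over (irradiance, temperature) picks one of the three local RBFNN models, whose output is the reference voltage. The region centers and the Gaussian width below are illustrative placeholders:

    import numpy as np

    REGION_CENTERS = np.array([[200.0, 15.0], [500.0, 25.0], [900.0, 35.0]])  # (G, T)

    def membership(x, centers=REGION_CENTERS, width=250.0):
        d = np.linalg.norm(x - centers, axis=1)
        return np.exp(-(d / width) ** 2)      # Gaussian membership per region

    def predict_vref(x, rbf_models):
        idx = int(np.argmax(membership(x)))   # fuzzy classifier selects a region
        return rbf_models[idx](x)             # local RBFNN yields reference voltage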

Simulated Annealing and Genetic Algorithm in Telecommunications Network Planning

The main goal of this work is to propose a way of combining two non-traditional algorithms to solve topological problems on telecommunications concentrator networks. The algorithms suggested are the simulated annealing algorithm and the genetic algorithm. The simulated annealing algorithm generalizes the well-known local search algorithms. In addition, simulated annealing allows the acceptance of moves in the search space which lead to solutions with higher cost, in an attempt to escape any local minima encountered. The genetic algorithm is a heuristic approach which is used in a wide range of optimization problems. In recent years this approach has also been widely applied in telecommunications network planning. To solve planning problems of varying complexity, it is important to find the most appropriate parameters for initializing the algorithm.
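
The acceptance of cost-increasing moves mentioned above is the classical Metropolis rule; a minimal sketch:

    import math, random

    def accept_move(delta_cost, temperature):
        if delta_cost <= 0:
            return True                       # always accept improvements
        # worse moves are accepted with temperature-dependent probability
        return random.random() < math.exp(-delta_cost / temperature)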

Comparing and Combining the Axial with the Network Maps for Analyzing Urban Street Pattern

Rooted in the study of the social functioning of space in architecture, Space Syntax (SS) and the more recent Network Pattern (NP) research demonstrate the 'spatial structures' of the city, i.e. the hierarchical patterns of streets, junctions, and alley ends. Applying SS and NP models, planners can conceptualize the real city's patterns. Although both models yield optimal paths through the city, their underlying representations of the city's spatial configuration differ. The Axial Map analyzes the topological, non-distance-based connectivity structure, whereas the Central-Node Map and the Shortcut-Path Map analyze the metrical, distance-based structures. This research contrasts and combines them to understand various forms of the city's structures. It concludes that, while they reveal different spatial structures, the Space Syntax and Network Pattern urban models complement each other. Combined, they capture both the global access structure and the locally compact structures, namely the central nodes and the shortcuts, of the city.
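
The contrast between topological and metrical structure can be shown on a toy street graph (an illustrative sketch assuming the networkx library; the nodes and lengths are invented):

    import networkx as nx

    G = nx.Graph()
    G.add_edge("A", "B", length=400.0)   # hypothetical street segments (metres)
    G.add_edge("B", "C", length=100.0)
    G.add_edge("A", "C", length=900.0)

    steps = nx.shortest_path_length(G, "A", "C")                    # axial-style: 1 step
    metres = nx.shortest_path_length(G, "A", "C", weight="length")  # network-style: 500 m
    print(steps, metres)   # the two models rank the same routes differently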

Estimating Spatial Disaggregation of Urban Thermal Responsiveness on Summer Diurnal Range with a Numerical Modeling Approach in Bangkok, Thailand

Facing public concern about the environment and climate change, city planners now consider the urban climate in their planning choices. Different urban morphologies across the central Bangkok Metropolitan Area (BMA) are used to investigate the effects of both the composition and the configuration variables of urban morphology indicators on the summer diurnal range of the urban climate, using correlation analyses and multiple linear regressions. Results first indicate that approximately 92.6% of the variation in the average maximum daytime near-surface air temperature (Ta) is explained jointly by two composition variables of the urban morphology indicators: the open space ratio (OSR) and the floor area ratio (FAR). It has been possible to determine the membership of sample areas in local climate zones (LCZs) using these urban morphology descriptors, computed automatically from GIS and remotely sensed data. Finally, temperature differences among widely separated zones were found: the city center is warmer than the outskirts of Bangkok, ranging on average from 35.48±1.04 ºC (mean±S.D.) for the maximum daytime near-surface temperature to 28.27±0.21 ºC for extreme events, and the difference can exceed 8 ºC. A spatially disaggregated map of urban thermal responsiveness would be helpful for several reasons. First, it would localize urban areas exhibiting different climate behavior during summer daytime and be a good indicator of urban climate variability. Second, when overlaid with a land cover map, it may help identify possible urban management strategies to reduce heat-wave effects in the BMA.
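
A minimal sketch of the two-variable regression reported above (the arrays are placeholders, not the study's data):

    import numpy as np

    def fit_ta_model(osr, far, ta):
        X = np.column_stack([np.ones_like(osr), osr, far])
        coef, *_ = np.linalg.lstsq(X, ta, rcond=None)   # Ta ~ b0 + b1*OSR + b2*FAR
        pred = X @ coef
        r2 = 1.0 - ((ta - pred) ** 2).sum() / ((ta - ta.mean()) ** 2).sum()
        return coef, r2     # the study reports R^2 of about 0.926 for this pair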

Toward Full Public E-Service Environment in Developing Countries

Changing technology and increased constituent demand for government services drive the need for governmental responsiveness. Government organisations in developing countries will be under increased pressure to change their bureaucratic systems so as to respond rapidly to changing and increasing requirements and to technological advances. This paper aims to present a conceptual framework explaining the main barriers to and drivers of public e-service development. The framework thereby provides a basic context within which the process and practice of e-services can be implemented successfully in public sector organisations. The framework is flexible enough to be adopted at different levels of government, national or local, by developing countries around the world.

Springback Investigation on Sheet Metal Incremental Formed Parts

Incremental forming is a complex forming process in which local deformation accumulates continuously, and springback that affects forming quality can occur. A springback evaluation method based on forming-error compensation is proposed, in which springback is defined as the difference between the theoretical and the actual amount of compensation along the measured direction. Experiments were designed and carried out according to this evaluation method. The results show that the average springback magnitude (δE) of the formed parts is very small, and that forming precision can be significantly improved by adopting the compensation method. Based on the biaxial tensile stress state in the main deformation area, it is hypothesized that little springback arises from bending behavior in the formed parts.
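
The evaluation measure defined above reduces to a simple computation; a minimal sketch:

    def springback_average(theoretical, measured):
        # delta_E: mean difference between theoretical and actual compensation
        # along the measured direction
        diffs = [abs(t - m) for t, m in zip(theoretical, measured)]
        return sum(diffs) / len(diffs)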

Dynamic Attribute Dependencies in Relational Attribute Grammars

Considering the theory of attribute grammars, we use logical formulas instead of traditional functional semantic rules. Following the decoration of a derivation tree, a suitable algorithm should maintain the consistency of the formulas together with the evaluation of the attributes. This may be a Prolog-like resolution, but this paper examines a somewhat different strategy, based on production specialization, local consistency and propagation: given a derivation tree, it is interactively decorated, i.e. incrementally checked and evaluated. The non-directed dependencies are dynamically directed during attribute evaluation.
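
As a toy stand-in for the propagation strategy (not the paper's algorithm): each relational rule fires as soon as all but one of its attributes are known, so the non-directed dependencies become directed at evaluation time:

    def propagate(values, constraints):
        # constraints: list of (attrs, solver); solver computes the one unknown
        # attribute from the known ones in `values`
        changed = True
        while changed:
            changed = False
            for attrs, solver in constraints:
                unknown = [a for a in attrs if a not in values]
                if len(unknown) == 1:
                    values[unknown[0]] = solver(values)
                    changed = True
        return values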

Identification of a Mechanism System by Using the Modified PSO Method

This paper proposes an efficient modified particle swarm optimization (MPSO) method to identify a slider-crank mechanism driven by a field-oriented PM synchronous motor. In the system identification, we adopt the MPSO method to find the parameters of the slider-crank mechanism. The new algorithm adds a "distance" term to the traditional PSO fitness function to avoid converging to a local optimum. Comparisons of numerical simulations and experimental results show that the MPSO identification method for the slider-crank mechanism is feasible.
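
A hypothetical sketch of the modified fitness: the added "distance" term rewards particles for staying spread out around the swarm's best position, discouraging premature convergence (the weight is an assumption):

    import numpy as np

    def mpso_fitness(model_error, particle, gbest, w_dist=0.1):
        diversity = np.linalg.norm(particle - gbest)
        return model_error - w_dist * diversity   # lower is better; spread is rewarded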

2D Validation of a High-Order Adaptive Cartesian-Grid Finite-Volume Characteristic-Flux Model with Embedded Boundaries

A finite volume method based on characteristic fluxes for compressible fluids is developed. An explicit cell-centered resolution is adopted, where second- and third-order accuracy is provided by using two different MUSCL schemes with Minmod, Sweby, or Superbee limiters for the hyperbolic part. Several different time integrators are used and described in this paper. Resolution is performed on a generic unstructured Cartesian grid, where solid boundaries are handled by a cut-cell method. Interfaces are explicitly advected in a non-diffusive way, ensuring local mass conservation. An improved cell cutting has been developed to handle boundaries of arbitrary geometrical complexity. Instead of using a polygon clipping algorithm, we use the voxel traversal algorithm coupled with a local flood-fill scanline to intersect 2D or 3D boundary surface meshes with the fixed Cartesian grid. The stability problem of small cells near the boundaries is solved using a fully conservative merging method. Inflow and outflow conditions are also implemented in the model. The solver is validated on 2D academic test cases, such as the flow past a cylinder. The latter test cases are performed both in the frame of the body and in a fixed frame where the body moves across the mesh. Adaptive Cartesian grids are provided by PARAMESH, currently without complex geometries.
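
A minimal sketch of the limited MUSCL reconstruction mentioned above, shown in 1D with the minmod limiter (second-order left/right states at interior cells):

    import numpy as np

    def minmod(a, b):
        return np.where(a * b > 0.0,
                        np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def muscl_states(u):
        slope = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])  # limited slope per cell
        u_right_face = u[1:-1] + 0.5 * slope               # state at cell's right face
        u_left_face = u[1:-1] - 0.5 * slope                # state at cell's left face
        return u_left_face, u_right_face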

Future Housing Energy Efficiency Associated with the Auckland Unitary Plan

The draft Auckland Unitary Plan outlines the future land use for new housing and businesses under Auckland's population growth over the next thirty years. According to the Auckland Unitary Plan, over the next 30 years the population of Auckland is projected to increase by one million, and up to 70% of total new dwellings will be located within the existing urban area. Intensification will not only increase the number of medium- or higher-density houses, such as terraced houses and apartment buildings, within the existing urban area, but will also change the mean housing design data, which can affect building thermal performance under the local climate. Based on the mean energy consumption and building design data of a number of Auckland sample houses, and the relationships between them, this study estimates the future mean housing energy consumption associated with the change in mean housing design data and evaluates housing energy efficiency under the Auckland Unitary Plan.

Innovation, e-Learning and Higher Education: An Example of a University LMS Adoption Process

The evolution of ICT has changed all sectors of society, and these changes have had an irreversible impact on higher education institutions, which are expected to adopt innovative technologies in their teaching practices. As its theoretical framework, this study selects Rogers' theory of innovation diffusion, which is widely used to illustrate how technologies move from a localized invention to widespread adoption in organizational practices. Based on descriptive statistical data collected in a European higher education institution, a three-year longitudinal study was conducted to analyze and discuss the different stages of an LMS adoption process. Results show that ICT integration in higher education is neither uniformly successful nor a linear process, and that multiple aspects must be taken into account.

Stochastic Learning Algorithms for Modeling Human Category Learning

Most neural network (NN) models of human category learning use a gradient-based learning method, which assumes that locally optimal changes are made to model parameters on each learning trial. This method tends to underpredict variability in individual-level cognitive processes. In addition, many recent models of human category learning have been criticized for not being able to replicate the rapid changes in categorization accuracy and attention processes observed in empirical studies. In this paper we introduce stochastic learning algorithms for NN models of human category learning and show that use of these algorithms can result in (a) rapid changes in accuracy and attention allocation, and (b) different learning trajectories and more realistic variability at the individual level.
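
An illustrative sketch of a stochastic learning rule of the kind discussed (the Gaussian noise scale is an assumption): adding noise to each gradient step produces trial-to-trial variability and occasional abrupt shifts that a deterministic update cannot:

    import numpy as np

    rng = np.random.default_rng(0)

    def stochastic_update(params, grad, lr=0.05, noise_sd=0.02):
        # locally optimal step plus noise -> individual-level variability
        return params - lr * grad + rng.normal(0.0, noise_sd, size=params.shape)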

Numerical Studies of Galerkin-type Time-discretizations Applied to Transient Convection-diffusion-reaction Equations

We deal with the numerical solution of time-dependent convection-diffusion-reaction equations. We combine the local projection stabilization method for the space discretization with two different time discretization schemes: the continuous Galerkin-Petrov (cGP) method and the discontinuous Galerkin (dG) method of polynomial degree k. We establish optimal error estimates and present numerical results which show that the cGP(k) and dG(k) methods are accurate of order k+1 in the whole time interval. Moreover, the cGP(k) method is superconvergent of order 2k and the dG(k) method of order 2k+1 at the discrete time points. Furthermore, the dependence of the results on the choice of the stabilization parameter is discussed and compared.
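
Written schematically (a shorthand for the a priori estimates referenced above, with τ the time step size and C a constant independent of τ):

    \|u - u_\tau\| \le C\,\tau^{k+1}            % cGP(k) and dG(k), whole interval
    \|u(t_n) - u_\tau(t_n)\| \le C\,\tau^{2k}   % cGP(k), superconvergence at t_n
    \|u(t_n) - u_\tau(t_n)\| \le C\,\tau^{2k+1} % dG(k), superconvergence at t_n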

Continuous Feature Adaptation for Non-Native Speech Recognition

The current speech interfaces in many military applications may be adequate for native speakers. However, the recognition rate drops considerably for non-native speakers (people with foreign accents). This is mainly because non-native speakers exhibit large temporal and intra-phoneme variations when they pronounce the same words. The problem is further complicated by the presence of strong environmental noise such as tank noise, helicopter noise, etc. In this paper, we propose a novel continuous acoustic feature adaptation algorithm for online accent and environmental adaptation. Implemented by incremental singular value decomposition (SVD), the algorithm captures local acoustic variation and runs in real time. This feature-based adaptation method is then integrated with the conventional model-based maximum likelihood linear regression (MLLR) algorithm. Extensive experiments have been performed on the NATO non-native speech corpus with a baseline acoustic model trained on native American English. The proposed feature-based adaptation algorithm improved the average recognition accuracy by 15%, while the MLLR model-based adaptation achieved an 11% improvement. The corresponding word error rate (WER) reductions were 25.8% and 2.73%, compared to the system without adaptation. The combined adaptation achieved an overall recognition accuracy improvement of 29.5% and a WER reduction of 31.8%, compared to the system without adaptation.
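
A hypothetical sketch of the incremental-SVD idea: a low-rank subspace of recent acoustic feature frames is updated as frames arrive, and each frame is projected onto it (the rank and the naive re-truncation below are assumptions, not the paper's exact update):

    import numpy as np

    def update_subspace(U, s, frame, rank=8):
        B = np.column_stack([U * s, frame])        # append the new frame
        U2, s2, _ = np.linalg.svd(B, full_matrices=False)
        return U2[:, :rank], s2[:rank]             # keep the dominant directions

    def adapt_frame(frame, U):
        return U @ (U.T @ frame)                   # project onto local subspace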

Numerical Solution of Second-Order Ordinary Differential Equations by Improved Runge-Kutta Nystrom Method

In this paper we develop the Improved Runge-Kutta Nystrom (IRKN) method for solving second-order ordinary differential equations. The methods are two-step in nature and require a lower number of function evaluations per step compared with the existing Runge-Kutta Nystrom (RKN) methods. The methods are therefore computationally more efficient at achieving a given order of local accuracy. Algebraic order conditions of the method are obtained, and third- and fourth-order methods are derived with two and three stages, respectively. Numerical results are given to illustrate the efficiency of the proposed method compared to the existing RKN methods.
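
For orientation, one step of a generic RKN-type method for y'' = f(x, y) looks as follows; the stages evaluate f only, and the coefficients below form a simple second-order scheme, not the paper's IRKN pairs:

    def rkn_step(f, x, y, yp, h):
        # y'' = f(x, y); y is the solution, yp its first derivative
        k1 = f(x, y)
        k2 = f(x + h, y + h * yp + 0.5 * h * h * k1)
        y_new = y + h * yp + h * h * (k1 / 3.0 + k2 / 6.0)
        yp_new = yp + h * (k1 + k2) / 2.0
        return y_new, yp_new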

Computer-Based Medicine: I - The Future

Throughout thirty years of local, national, and international experience in medicine as a medical student, junior doctor, and eventually Consultant and Professor in Anaesthesia, Intensive Care and Pain Management, I have noted significant generalised dissatisfaction among medical students and doctors regarding their medical education and practice. We repeatedly hear complaints from patients about the dysfunctional health care system they are dealing with and, consequently, the poor medical service they are receiving. Medical students are bombarded with lectures, tutorials, clinical rounds, and various exams. Clinicians are weighed down with a never-ending array of competing duties. Patients are extremely unhappy about the long waiting lists, loss of their records, and the continuous deterioration of the health care service. This problem has been reported in different countries by several authors [1,2,3]. In an attempt to solve this dilemma, the idea of implementing computer technology in medicine has been suggested [2,3]. Computers in medicine are a medium of international communication of the revolutionary advances being made in the application of the computer to the fields of bioscience and medicine [4,5]. Awareness of using computers in medicine has recently increased all over the world. At Misr University for Science & Technology (MUST), Egypt, medical students are now given hand-held computers (laptops) with Internet access, making their medical education accessible, convenient, and up to date. However, this trial still needs to be validated. To help readers catch up with the ongoing rapid development of this interesting field, the author has decided to continue reviewing the literature, exploring the state of the art in computer-based medicine and updating medical professionals, especially local trainee doctors in Egypt. In Part I of this review article we give a general background, discussing the potential use of computer technology in the various aspects of the medical field, including education, research, clinical practice, and the health care service given to patients. It is hoped that this will help to start changing the culture and to promote awareness of the importance of implementing information technology (IT) in medicine, a field in which such help is needed. International collaboration is recommended to support emerging countries in achieving this target.