A Unity-Gain Fully-Differential 10-bit, 40-MSps Sample-and-Hold Amplifier in 0.18-μm CMOS

A 10-bit, 40-MSps sample-and-hold amplifier, implemented in 0.18-μm CMOS technology with a 3.3-V supply, is presented for application in the front-end stage of an analog-to-digital converter. Topology selection, biasing, compensation and common-mode feedback are discussed. A cascode technique is used to increase the DC gain. The proposed opamp provides a 149-MHz unity-gain bandwidth (ωu), an 80° phase margin and a differential peak-to-peak output swing of more than 2.5 V. The circuit achieves a total harmonic distortion (THD) of 55 dB using the improved fully differential two-stage operational amplifier, which has 91.7 dB of gain. The power dissipation of the designed sample-and-hold is 4.7 mW. The designed system demonstrates a relatively suitable response across different process, temperature and supply (PVT) corners.

High Performance Computing Using Out-of-Core Sparse Direct Solvers

The in-core memory requirement is a bottleneck in solving large three-dimensional Navier-Stokes finite element formulations using sparse direct solvers. An out-of-core solution strategy is a viable alternative for reducing in-core memory requirements while solving large-scale problems. This study evaluates the performance of various out-of-core sequential solvers based on multifrontal or supernodal techniques in the context of finite element formulations for three-dimensional problems on a Windows platform. Three solvers, HSL_MA78, MUMPS and PARDISO, are compared. Their performance is evaluated on a 64-bit machine with 16 GB of RAM for the finite element formulation of flow through a rectangular channel. It is observed that relatively large problems can be solved using the out-of-core PARDISO solver. The implementation of Newton and modified Newton iterations is also discussed.
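
As a rough illustration of the last point, the sketch below contrasts full Newton and modified Newton iterations for a nonlinear system F(u) = 0 of the kind arising from a finite element discretization. It is a minimal sketch, not the paper's implementation: scipy's in-core sparse LU stands in for an out-of-core direct solver such as PARDISO or HSL_MA78, and the residual and Jacobian callbacks are hypothetical.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu

def newton(residual, jacobian, u0, tol=1e-8, max_iter=20, modified=False):
    """Full Newton refactorizes the Jacobian every iteration; modified Newton
    reuses the factorization of the initial Jacobian."""
    u = u0.copy()
    lu = None
    for k in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            return u, k
        if lu is None or not modified:
            lu = splu(jacobian(u).tocsc())   # sparse LU factorization: the dominant cost
        u -= lu.solve(r)                     # cheap triangular solves reuse the factors
    return u, max_iter

# Tiny hypothetical test system: F(u) = A u + u**3 - b with a sparse tridiagonal A.
A = diags([3.0] * 50) - diags([1.0] * 49, 1) - diags([1.0] * 49, -1)
b = np.ones(50)
residual = lambda u: A @ u + u**3 - b
jacobian = lambda u: A + diags(3 * u**2)

u, iters = newton(residual, jacobian, np.zeros(50))   # pass modified=True to reuse factors
print(iters, np.linalg.norm(residual(u)))
```

Modified Newton trades extra iterations for fewer factorizations, which matters most in an out-of-core setting where each factorization is written to and read from disk.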

Paradigm of Relocation of Urban Poor Habitats (Slums): Case Study of Nagpur City

Developing countries are facing a problem of slums, and there appears to be no foolproof solution to eradicate them. Of the three approaches to slum development aimed at improving quality of life, the in-situ upgradation approach is found to be the best, while the relocation approach has proved to be a failure. Assessing the factors responsible for the failure of relocation projects is the basic aim of this paper. These factors are loss of livelihood, lack of security of tenure and inefficiency of the government, and they are traced and mapped from examples of Western and Indian cities. The National Habitat and Resettlement Policy emphasized the relationship between shelter and workplace. The SRA has identified 55 slums for relocation due to reservation of land uses, security of tenure and the non-notified status of the slums. Policy guidelines are suggested for successful relocation projects. Keywords: Livelihood, Relocation, Slums, Urban poor.

Mobile Phone as a Tool for Data Collection in Field Research

The need for accurate and timely field data is shared among organizations engaged in fundamentally different activities, public services or commercial operations. There are three major components in the process of qualitative research: data collection, interpretation and organization of data, and the analytic process. Significant technological advancements have been made in mobile devices (mobile phones, PDAs, tablets, laptops, etc.), resources that can potentially be applied to data collection in field research in order to improve this process. This paper presents and discusses the main features of a mobile phone based solution for field data collection, composed of three modules: a survey editor, a server web application and a client mobile application. The data gathering process begins with the survey creation module, which enables the production of tailored questionnaires. The field workforce receives the questionnaire(s) on their mobile phones, collects the interview responses and sends them back to a server for immediate analysis.
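
Purely as an illustration of this three-module data flow, the sketch below shows a hypothetical questionnaire structure and a client posting a completed interview back to the server. The field names, endpoint URL and use of the requests library are assumptions for the sketch, not part of the described system.

```python
import requests  # assumed HTTP client; the real client runs on the mobile phone

# A tailored questionnaire as it might be produced by the survey editor module.
questionnaire = {
    "survey_id": "household-2024",          # hypothetical identifier
    "questions": [
        {"id": "q1", "type": "choice", "text": "Water source?",
         "options": ["piped", "well", "other"]},
        {"id": "q2", "type": "number", "text": "Household size?"},
    ],
}

# One completed interview, ready to be sent back for immediate analysis.
response = {
    "survey_id": questionnaire["survey_id"],
    "answers": {"q1": "well", "q2": 5},
    "collected_at": "2024-05-01T10:30:00Z",
}

# The client mobile application would post each interview to the server web
# application; the endpoint path and payload schema here are assumptions.
r = requests.post("https://example.org/api/responses", json=response, timeout=10)
print(r.status_code)
```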

A Perceptually Optimized Foveation-Based Wavelet Embedded Zerotree Image Coding

In this paper, we propose a Perceptually Optimized Foveation-based Embedded ZeroTree Image Coder (POEFIC) that applies perceptual weighting to the wavelet coefficients prior to SPIHT encoding in order to reach a targeted bit rate with improved perceptual quality for a given bit rate and fixation point, which determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS) and plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception. Thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on (1) foveation masking, to remove or reduce high-frequency content in peripheral regions; (2) luminance and contrast masking; and (3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. Experimental results show that our coder demonstrates very good performance in terms of quality measurement.
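
The sketch below illustrates only the foveation part of such a weighting, under simple assumptions (an exponential decay with eccentricity from the fixation point and an arbitrary decay constant). The paper's luminance/contrast masking and CSF-based weighting are not reproduced here.

```python
import numpy as np

def foveation_weights(shape, fixation, level, decay=0.05):
    """Weight map for one wavelet detail subband at a given decomposition level.
    `fixation` is given in subband coordinates; finer-detail subbands (higher
    `level`) are attenuated more strongly away from the fixation point."""
    rows, cols = np.indices(shape)
    eccentricity = np.hypot(rows - fixation[0], cols - fixation[1])
    return np.exp(-decay * level * eccentricity)     # assumed decay model

subband = np.random.randn(64, 64)                    # stand-in detail subband
weighted = subband * foveation_weights(subband.shape, fixation=(32, 32), level=3)
```

In the full coder the foveation map would be combined with the masking and CSF weights before the coefficients enter the embedded zerotree/SPIHT pass.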

A Hybrid Ontology-Based Approach for Ranking Documents

The increasing volume of information on the Internet creates a growing need for new (semi-)automatic methods for retrieving documents and ranking them according to their relevance to the user query. In this paper, after a brief review of ranking models, a new ontology-based approach for ranking HTML documents is proposed and evaluated in various circumstances. Our approach is a combination of conceptual, statistical and linguistic methods. This combination preserves ranking precision without losing speed. Our approach exploits natural language processing techniques to extract phrases from the documents and the query and to stem words. An ontology-based conceptual method is then used to annotate documents and expand the query. To expand a query, the spread activation algorithm is improved so that the expansion can be done flexibly and along various aspects. The annotated documents and the expanded query are processed to compute the relevance degree using statistical methods. The outstanding features of our approach are (1) combining conceptual, statistical and linguistic features of documents, (2) expanding the query with its related concepts before comparing it to documents, (3) extracting and using both words and phrases to compute the relevance degree, (4) improving the spread activation algorithm to perform the expansion based on a weighted combination of different conceptual relationships, and (5) allowing variable document vector dimensions. A ranking system called ORank was developed to implement and test the proposed model. Test results are included at the end of the paper.
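
As a minimal illustration of weighted spread activation over conceptual relationships, the sketch below expands a query over a toy ontology graph. The relationship types, weights, decay and stopping rule are assumptions and do not reproduce the improved algorithm used in ORank.

```python
def spread_activation(graph, seeds, rel_weights, decay=0.5, threshold=0.1, steps=2):
    """graph: {concept: [(neighbor, relationship), ...]}
    seeds: initial activation values for the query concepts."""
    activation = dict(seeds)
    for _ in range(steps):
        updated = dict(activation)
        for node, value in activation.items():
            for neighbor, rel in graph.get(node, []):
                spread = value * decay * rel_weights.get(rel, 0.0)
                if spread > threshold:
                    updated[neighbor] = max(updated.get(neighbor, 0.0), spread)
        activation = updated
    return activation

# Toy ontology: 'car' is-a 'vehicle', has an 'engine' part; 'vehicle' is-a 'transport'.
graph = {"car": [("vehicle", "is-a"), ("engine", "part-of")],
         "vehicle": [("transport", "is-a")]}
expanded = spread_activation(graph, {"car": 1.0},
                             rel_weights={"is-a": 0.8, "part-of": 0.6})
print(expanded)   # query 'car' expanded with weighted related concepts
```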

Loss Analysis in TEP Considering Uncertainty in Demand by DPSO

This paper presents a mathematical model and a methodology for analyzing losses in transmission expansion planning (TEP) under uncertainty in demand. The methodology is based on discrete particle swarm optimization (DPSO). DPSO is a useful and powerful stochastic evolutionary algorithm for solving large-scale, discrete, nonlinear optimization problems such as TEP. The effectiveness of the proposed idea is tested on an actual transmission network of the Azerbaijan regional electric company, Iran. The simulation results show that considering losses, even in transmission expansion planning for a network with low load growth, decreases operational costs considerably and enables the network to deliver electric power to load centers more reliably.
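
The sketch below shows a generic binary-coded discrete PSO of the kind described, where each bit could indicate whether a candidate transmission line is built. The cost function is a toy placeholder rather than the paper's investment, loss and reliability terms, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    # Toy placeholder objective: prefer few "lines" but penalize leaving the
    # first three candidate corridors completely unbuilt.
    return x.sum() + 10.0 * (x[:3].sum() < 1)

n_particles, n_bits, iters = 20, 8, 50
x = rng.integers(0, 2, (n_particles, n_bits)).astype(float)   # candidate plans
v = np.zeros((n_particles, n_bits))
pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    # Discrete PSO step: velocities are mapped through a sigmoid to bit probabilities.
    x = (rng.random(x.shape) < 1.0 / (1.0 + np.exp(-v))).astype(float)
    c = np.array([cost(p) for p in x])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = x[improved], c[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print(gbest, pbest_cost.min())
```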

Empirical Analyses of Determinants of D.J.S.I. US Mean Returns

This study investigates the relationship of the 10-year bond value, the yen/U.S. dollar exchange rate, non-farm payrolls (all employees) and crude oil to the U.S. Dow Jones Sustainability Index (D.J.S.I.). A GARCH model is used to test these relationships for the period January 1, 1999 to January 31, 2008 using monthly data. Results show that increases in the 10-year bond and in non-farm payrolls (all employees) lead to an increase in D.J.S.I. returns. In contrast, the volatility of the yen/U.S. dollar exchange rate as well as increases in crude oil returns have negative effects on U.S. D.J.S.I. returns. This study aims to help investors understand the influence certain macroeconomic indicators have on companies' stock returns as reported by the D.J.S.I.
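
A simplified two-step sketch of this kind of analysis is given below, using synthetic stand-in series and hypothetical column names: an OLS mean equation relating monthly returns to the macro regressors, followed by a GARCH(1,1) fit on the residuals. It assumes the statsmodels and arch packages and does not reproduce the paper's exact GARCH specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from arch import arch_model

rng = np.random.default_rng(1)
n = 109                                   # roughly Jan 1999 - Jan 2008, monthly
df = pd.DataFrame({                       # synthetic stand-in data, not the real series
    "bond_10y": rng.normal(size=n),
    "jpy_usd": rng.normal(size=n),
    "nonfarm_payrolls": rng.normal(size=n),
    "crude_oil": rng.normal(size=n),
})
df["djsi_return"] = (0.3 * df["bond_10y"] + 0.2 * df["nonfarm_payrolls"]
                     + rng.normal(scale=0.5, size=n))

y = df["djsi_return"]
X = sm.add_constant(df[["bond_10y", "jpy_usd", "nonfarm_payrolls", "crude_oil"]])

mean_fit = sm.OLS(y, X).fit()             # signs of coefficients give the direction of effect
print(mean_fit.params)

vol_fit = arch_model(mean_fit.resid, vol="GARCH", p=1, q=1).fit(disp="off")
print(vol_fit.summary())                  # conditional volatility of the residual returns
```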

Numerical Simulation of the Liquid-Vapor Interface Evolution with Material Properties

A saturated liquid is heated to boiling in a parallelepipedic boiler. Heat is supplied to the liquid through the horizontal bottom of the boiler, the other walls being adiabatic. During boiling, the liquid evaporates through its free surface, deforming it. This surface subdivides the boiler into two regions, occupied by the boiling liquid (broth) below and by the vapor that surmounts it above. A two-fluid model is used to describe the dynamics of the broth, its vapor and their interface. In this model, the broth is treated as a monophasic fluid (homogeneous model) and forms, with its vapor, a diphasic pseudo-fluid (two-fluid model). Furthermore, the interface is treated as a mixture zone characterized by a superficial void fraction denoted α*. The aim of this article is to describe the dynamics of the interface between the boiled fluid and its vapor within the boiler. Solving the problem allowed us to show the evolution of the broth and of the liquid level.

Influence of Hydrocarbons on Plant Cell Ultrastructure and Main Metabolic Enzymes

The influence of octane and benzene on plant cell ultrastructure and on enzymes of basic metabolism, such as those of nitrogen assimilation and energy generation, has been studied. Different plants, perennial ryegrass (Lolium perenne) and alfalfa (Medicago sativa); crops, maize (Zea mays L.) and bean (Phaseolus vulgaris); shrubs, privet (Ligustrum sempervirens) and trifoliate orange (Poncirus trifoliata); and trees, poplar (Populus deltoides) and white mulberry (Morus alba L.), were exposed to hydrocarbons at different concentrations (1, 10 and 100 mM). Destructive changes in the ultrastructure of bean and maize leaf cells under the influence of benzene vapour were revealed at the level of the photosynthetic and energy-generating subcellular organelles. Various deviations in the structure and distribution of subcellular organelles were observed in alfalfa and ryegrass root cells under the influence of benzene and octane absorbed through the roots. The level of destructive change is concentration dependent. Benzene at the lower concentrations of 1 and 10 mM increased glutamate dehydrogenase (GDH) activity in maize roots and leaves and in poplar and mulberry shoots, to a greater extent at the lower, 1 mM concentration. The induction was more intense in plant roots. The highest tested benzene concentration, 100 mM, inhibited the enzyme in all plants. Octane induced GDH in all grassy plants at all tested concentrations; however, the rate of induction decreased as the hydrocarbon concentration increased. Octane at 1 mM induced GDH in privet, trifoliate orange and white mulberry shoots. The highest concentration, 100 mM octane, had an inhibitory effect on GDH activity in all plants. Octane induced malate dehydrogenase in almost all plants and tested concentrations, indicating intensification of the tricarboxylic acid cycle. These data could inform criteria for selecting plants for phytoremediation of soils contaminated with oil hydrocarbons.

Spatial Variability in Human Development Patterns in Assiut, Egypt

Motivated by the impact of maps in enhancing the perception of quality of life in a region, this work examines the use of spatial analytical techniques to explore the role of space in shaping human development patterns in Assiut governorate. Variations in the human development index (HDI) of the governorate's villages, districts and cities are mapped using geographic information systems (GIS). Global and local spatial autocorrelation measures are employed to assess the level of spatial dependency in the data and to map clusters of human development. Results show prominent disparities in HDI between regions of Assiut. Strong patterns of spatial association were found, indicating the presence of clusters in the distribution of HDI. Finally, the study identifies several "hot spots" in the governorate as areas for further investigation into the determinants of such levels of human development. This is very important for accomplishing the development plan for the poorest regions currently adopted in Egypt.
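
For reference, the global Moran's I statistic commonly used to assess such spatial dependency can be computed as in the sketch below; the HDI values and contiguity matrix are toy stand-ins, not Assiut data, and local (LISA) measures are not shown.

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I with a row-standardized spatial weight matrix W."""
    W = W / W.sum(axis=1, keepdims=True)          # row-standardize the weights
    z = values - values.mean()
    num = (W * np.outer(z, z)).sum()              # sum_ij w_ij * z_i * z_j
    return (len(values) / W.sum()) * num / (z @ z)

hdi = np.array([0.55, 0.58, 0.60, 0.72, 0.74, 0.76])   # hypothetical HDI values
W = np.array([[0, 1, 1, 0, 0, 0],                       # toy contiguity matrix
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
print(morans_i(hdi, W))    # values near +1 indicate clustering of similar HDI
```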

Community Innovation in Sustainable Development: A Cross Case Study

Although innovative solutions in the field of sustainable development have been sought worldwide by environmental groups, academia, governments and companies for many years, citizens and communities have recently emerged as a new group and taken an increasingly active role in this field. Many scholars have called for more research on the role of community and community innovation in sustainable development, and this paper responds to those calls. We first summarize a comprehensive set of innovation principles. Then we conduct a qualitative cross-case study, comparing three community innovation cases in three different areas of sustainable development against the innovation principles. Finally, we summarize the case comparison and discuss the implications for sustainable development. A unified role model and an innovation distribution map of community innovation are developed to better understand community innovation in sustainable development.

Semi-Automatic Artifact Rejection Procedure Based on Kurtosis, Renyi's Entropy and Independent Component Scalp Maps

Artifact rejection plays a key role in many signal processing applications. Artifacts are disturbances that can occur during signal acquisition and that can alter the analysis of the signals themselves. Our aim is to remove artifacts automatically, in particular from electroencephalographic (EEG) recordings. A technique for automatic artifact rejection, based on independent component analysis (ICA) for artifact extraction and on higher-order statistics such as kurtosis and Shannon's entropy, was proposed some years ago in the literature. In this paper we enhance this technique by proposing a new method based on Renyi's entropy. The performance of our method was tested and compared to that of the method in the literature, and the former proved to outperform the latter.
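
The sketch below illustrates only the statistic computation on synthetic data: ICA components are extracted and their kurtosis and Renyi's entropy are z-scored to flag candidate artifactual components. The thresholds, bin count and data are assumptions, and the scalp-map inspection that makes the procedure semi-automatic is not shown.

```python
import numpy as np
from scipy.stats import kurtosis, zscore
from sklearn.decomposition import FastICA

def renyi_entropy(x, alpha=2.0, bins=64):
    """Renyi entropy of order alpha estimated from a histogram of the component."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

eeg = np.random.randn(1000, 19)                  # stand-in EEG: samples x channels
sources = FastICA(n_components=19, random_state=0).fit_transform(eeg)

k = zscore(kurtosis(sources, axis=0))            # outlying kurtosis -> spiky artifacts
h = zscore([renyi_entropy(s) for s in sources.T])
suspect = np.where((np.abs(k) > 1.64) | (np.abs(h) > 1.64))[0]   # assumed threshold
print("candidate artifact components:", suspect)
```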

Adsorption of H2 and CO on Iron-based Catalysts for Fischer-Tropsch Synthesis

The adsorption properties of CO and H2 on an iron-based catalyst with added Zr and Ni were investigated using temperature-programmed desorption (TPD). It was found that on the carburized iron-based catalysts, molecularly and dissociatively adsorbed CO coexist. The addition of Zr favored molecular adsorption of CO on the iron-based catalyst, while the presence of Ni was beneficial to dissociative adsorption of CO. On H2-reduced catalysts, hydrogen mainly adsorbs on surface iron sites and surface oxide sites. On CO-reduced catalysts, hydrogen probably exists as the most stable CH and OH species. The addition of Zr did not benefit dissociative adsorption of hydrogen on the iron-based catalyst, while the presence of Ni favored it.

Performance Analysis of HSDPA Systems Using Low-Density Parity-Check (LDPC) Coding as Compared to Turbo Coding

HSDPA is a feature introduced in the Release 5 specifications of the 3GPP WCDMA/UTRA standard to realize higher data rates together with lower round-trip times. The HSDPA concept offers an outstanding improvement in packet throughput and also significantly reduces the packet call transfer delay compared to the Release 99 DSCH. Until now, HSDPA systems have used turbo coding, a coding technique that approaches the Shannon limit. However, the main drawbacks of turbo coding are its high decoding complexity and high latency, which make it unsuitable for some applications such as satellite communications, since the transmission distance itself introduces latency due to the finite speed of light. Hence, in this paper it is proposed to use LDPC coding in place of turbo coding for the HSDPA system, which decreases latency and decoding complexity, although LDPC coding increases encoding complexity. Though the transmitter complexity increases at the NodeB, the end user benefits in terms of receiver complexity and bit error rate. The LDPC encoder is implemented using a sparse parity-check matrix H to generate codewords, and the belief propagation algorithm is used for LDPC decoding. Simulation results show that with LDPC coding the BER drops sharply as the number of iterations increases, for a small increase in Eb/No, which is not the case with turbo coding. The same BER was also achieved with fewer iterations, so the latency and receiver complexity are reduced with LDPC coding. HSDPA increases the downlink data rate within a cell to a theoretical maximum of 14 Mbps, with 2 Mbps on the uplink. The changes that HSDPA enables include better quality and more reliable and robust data services. In other words, while realistic data rates are only a few Mbps, the actual quality and number of users achieved improve significantly.
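
To make the sparse parity-check idea concrete, the sketch below runs a hard-decision bit-flipping decoder on a tiny parity-check matrix. This is a deliberately simplified stand-in for the sum-product belief propagation decoder used in the paper, and the small (7,4) matrix is only an illustration, not an HSDPA-grade sparse LDPC code.

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],     # parity-check matrix (rows = check equations)
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(received, H, max_iter=10):
    r = received.copy()
    for _ in range(max_iter):
        syndrome = H @ r % 2
        if not syndrome.any():           # all checks satisfied: valid codeword
            return r
        # Count, for each bit, how many failed checks it participates in,
        # then flip the most "suspicious" bit(s).
        failures = H.T @ syndrome
        r[failures == failures.max()] ^= 1
    return r

codeword = np.array([1, 0, 1, 1, 0, 1, 0])   # satisfies H @ c mod 2 == 0
received = codeword.copy()
received[2] ^= 1                             # inject a single bit error
print(bit_flip_decode(received, H))          # recovers the original codeword
```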

A Comparison of Wolf Pack Search and Four Other Optimization Algorithms

The main objective of this paper is to compare Wolf Pack Search (WPS), a newly introduced intelligent algorithm, with several other known algorithms: Particle Swarm Optimization (PSO), Shuffled Frog Leaping (SFL), and the binary and continuous genetic algorithms. All algorithms are applied to two benchmark cost functions. The aim is to identify the best algorithm in terms of speed and accuracy in finding the solution, where speed is measured by the number of function evaluations. The simulation results show that SFL, which requires fewer function evaluations, ranks first when simulation time is important, while WPS and PSO perform better when accuracy is the main concern.
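
The sketch below shows only the comparison harness implied by this speed metric: a benchmark cost function wrapped with an evaluation counter, exercised here by a trivial random-search baseline. WPS, PSO, SFL and the genetic algorithms themselves are not reproduced, and the benchmark choice is an assumption.

```python
import numpy as np

class CountedFunction:
    """Wraps a cost function and counts objective evaluations, the speed metric
    used when comparing the algorithms."""
    def __init__(self, f):
        self.f, self.calls = f, 0
    def __call__(self, x):
        self.calls += 1
        return self.f(x)

def rastrigin(x):                 # common benchmark; global minimum 0 at the origin
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
f = CountedFunction(rastrigin)

# A trivial random-search baseline stands in for the compared algorithms, just
# to show how evaluations are tallied for every candidate solution tried.
best = min(f(rng.uniform(-5.12, 5.12, 5)) for _ in range(3000))
print("best cost:", best, "function evaluations:", f.calls)
```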

Conventional and PSO-Based Approaches for Model Reduction of SISO Discrete Systems

Reduction of single-input single-output (SISO) discrete systems to a lower-order model, using a conventional and an evolutionary technique, is presented in this paper. The conventional technique combines the advantages of the Modified Cauer Form (MCF) and differentiation. In this method the original discrete system is first converted into an equivalent continuous system by applying the bilinear transformation. The denominator of the equivalent continuous system and its reciprocal are differentiated successively, and the reduced denominator of the desired order is obtained by combining the differentiated polynomials. The numerator is obtained by matching the quotients of the MCF. The reduced continuous system is converted back into a discrete system using the inverse bilinear transformation. In the evolutionary technique, Particle Swarm Optimization (PSO) is employed to reduce the higher-order model. The PSO method is based on minimization of the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model for a unit step input. Both methods are illustrated through a numerical example.
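
The ISE objective that the PSO minimizes can be sketched as below, where the discrete transfer-function coefficients are arbitrary placeholders rather than the paper's numerical example.

```python
import numpy as np
from scipy.signal import lfilter

def step_response(num, den, n=100):
    """Unit-step response of a discrete transfer function num(z)/den(z)."""
    return lfilter(num, den, np.ones(n))

def ise(num_full, den_full, num_red, den_red, n=100):
    """Integral (sum of) squared error between the two step responses."""
    e = step_response(num_full, den_full, n) - step_response(num_red, den_red, n)
    return np.sum(e ** 2)

# Placeholder 3rd-order original and 2nd-order reduced models (stable poles).
num_full, den_full = [0.2, 0.1, 0.05], [1.0, -1.0, 0.29, -0.02]
num_red,  den_red  = [0.25, 0.1],      [1.0, -0.9, 0.2]
print(ise(num_full, den_full, num_red, den_red))   # PSO would tune num_red/den_red
```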

Myotonometry Method for Assessing Muscle Performance

The aim of this paper is to present the role of myotonometry in assessing muscle viscoelasticity through measurement of the force index (IF) and stiffness (S) of the thigh muscle groups. The results are used to improve muscle training. The method is based on applying a mechanical impulse to the muscle group, which produces a muscle response in the form of acceleration, speed and amplitude curves. From these, information about elasticity and stiffness is obtained from the mechanical oscillations of the muscle tissue. This method offers the possibility of monitoring the muscle's capacity to produce mechanical energy, which allows efficient movement with minimal tissue deformation.
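
As a generic illustration of extracting such quantities, the sketch below estimates the oscillation frequency and logarithmic decrement from a synthetic acceleration trace; both are commonly related to tissue stiffness and elasticity in myotonometry. The sampling rate and signal are assumptions, and the paper's exact force index and stiffness formulas are not reproduced.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 2000.0                                            # assumed sampling rate, Hz
t = np.arange(0, 0.5, 1 / fs)
accel = np.exp(-8 * t) * np.cos(2 * np.pi * 15 * t)    # synthetic damped muscle response

peaks, _ = find_peaks(accel)                           # successive oscillation maxima
freq = 1.0 / np.mean(np.diff(t[peaks]))                # oscillation frequency (Hz)
log_decrement = np.mean(np.log(accel[peaks][:-1] / accel[peaks][1:]))
print(f"frequency = {freq:.1f} Hz, logarithmic decrement = {log_decrement:.2f}")
```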

Security of Mobile Agents in Ad Hoc Networks Using Threshold Cryptography

In its simplest form, a mobile agent is an independent piece of code with mobility and autonomous behavior. One of the main advantages of using mobile agents in a network is that they reduce the network traffic load. In an ad hoc network, mobile agents can be used to protect the network through agent-based IDS or IPS, and they can also be useful for deploying dynamic software in the network or retrieving information from network nodes. However, in an ad hoc network the mobile agent itself needs security: security services should be guaranteed both for the mobile agent and for the agent server. In this paper, to protect the mobile agent and agent server in an ad hoc network, we propose a solution based on threshold cryptography, a recent direction in cryptography in which trust is distributed among multiple nodes in the network.
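
The threshold idea can be illustrated with a minimal Shamir (k-of-n) secret-sharing sketch, where any k of n ad hoc nodes can jointly reconstruct a key. This is only an illustration of distributing trust, not the paper's protocol for protecting the mobile agent and agent server.

```python
import random

P = 2**127 - 1            # a Mersenne prime used as the field modulus

def split(secret, k, n):
    """Create n shares of `secret`; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P   # modular inverse of den
    return secret

shares = split(123456789, k=3, n=5)       # e.g. 5 ad hoc nodes, any 3 suffice
print(reconstruct(shares[:3]) == 123456789)
```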

A Complexity-Based Approach to Image Compression Using Neural Networks

In this paper we present an adaptive method for image compression that is based on the complexity level of the image. The basic compressor/de-compressor structure of this method is a multilayer perceptron artificial neural network. In the adaptive approach, different back-propagation artificial neural networks are used as compressor and de-compressor; the image is divided into blocks, the complexity of each block is computed, and a network is selected for each block according to its complexity value. Three complexity measures, called entropy, activity and pattern-based, are used to determine the level of complexity in image blocks, and their ability to estimate complexity is evaluated and compared. In training and evaluation, each image block is assigned to a network based on its complexity value. Best-SNR is an alternative for selecting the compressor network for image blocks in the evaluation phase; it chooses the trained network that yields the best SNR when compressing the input image block. In our evaluations, the best results are obtained when overlapping blocks are allowed and the compressor networks are chosen based on Best-SNR. In this case, the results demonstrate the superiority of this method compared with previous similar works and standard JPEG coding.
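
The sketch below illustrates only the block-assignment step using the entropy complexity measure: each block is labeled with the index of the network that should compress it. The 8x8 block size and the thresholds are assumptions, and the MLP compressors themselves are not shown.

```python
import numpy as np

def block_entropy(block, bins=32):
    """Shannon entropy of a block's intensity histogram (bits)."""
    counts, _ = np.histogram(block, bins=bins, range=(0, 255))
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def assign_networks(image, block=8, thresholds=(2.0, 4.0)):
    """Return, for each block, the index of the compressor network to use
    (0 = low, 1 = medium, 2 = high complexity)."""
    h, w = image.shape
    labels = np.zeros((h // block, w // block), dtype=int)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            e = block_entropy(image[i:i + block, j:j + block])
            labels[i // block, j // block] = int(np.searchsorted(thresholds, e))
    return labels

image = np.random.randint(0, 256, (64, 64)).astype(float)   # stand-in image
print(assign_networks(image))
```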