Testing Loaded Programs Using Fault Injection Technique

Fault tolerance is critical in many of today's large computer systems. This paper focuses on improving fault tolerance through testing, and in particular on memory faults: how to access the editable part of a process's memory space and how this part is affected. A special Software Fault Injection Technique (SFIT) is proposed for this purpose. It works by sequentially scanning the memory of the target process and attempting to edit the maximum number of bytes inside that memory. The technique was implemented and tested on a group of programs from software packages such as JetAudio, Notepad, Microsoft Word, Microsoft Excel, and Microsoft Outlook. The results from the test sample indicate that the size of the scanned area depends on several factors: process size, process type, and the virtual memory size of the machine under test. Increasing the process size increases the scanned memory space; input-output processes yield a larger scanned area than other processes; and increasing the virtual memory size also enlarges the scanned area, but only up to a certain limit.
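
The scanning step can be illustrated with the Win32 memory APIs. The Python sketch below is a minimal illustration assuming a Windows target and a known process ID; the paper does not state its implementation language, and `scan_writable_bytes` is a hypothetical helper. It walks a target process's address space with VirtualQueryEx and totals the bytes in committed, writable regions, i.e., the "editable" memory that SFIT measures.

    import ctypes
    import ctypes.wintypes as wt

    kernel32 = ctypes.windll.kernel32  # Windows only

    PROCESS_QUERY_INFORMATION = 0x0400
    PROCESS_VM_READ = 0x0010
    MEM_COMMIT = 0x1000
    # PAGE_READWRITE, PAGE_WRITECOPY, PAGE_EXECUTE_READWRITE, PAGE_EXECUTE_WRITECOPY
    WRITABLE = (0x04, 0x08, 0x40, 0x80)

    class MEMORY_BASIC_INFORMATION(ctypes.Structure):
        _fields_ = [("BaseAddress", ctypes.c_void_p),
                    ("AllocationBase", ctypes.c_void_p),
                    ("AllocationProtect", wt.DWORD),
                    ("RegionSize", ctypes.c_size_t),
                    ("State", wt.DWORD),
                    ("Protect", wt.DWORD),
                    ("Type", wt.DWORD)]

    def scan_writable_bytes(pid):
        """Sum the sizes of committed, writable regions in process `pid`."""
        handle = kernel32.OpenProcess(
            PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, False, pid)
        mbi = MEMORY_BASIC_INFORMATION()
        address, total = 0, 0
        # Walk the address space region by region until the query fails.
        while kernel32.VirtualQueryEx(handle, ctypes.c_void_p(address),
                                      ctypes.byref(mbi), ctypes.sizeof(mbi)):
            if mbi.State == MEM_COMMIT and mbi.Protect in WRITABLE:
                total += mbi.RegionSize
            address += mbi.RegionSize
        kernel32.CloseHandle(handle)
        return total

An actual fault-injection pass would additionally write perturbed bytes back with WriteProcessMemory; the sketch stops at measuring the scannable area, which is what the reported results quantify.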

Comparative Evaluation of Color-Based Video Signatures in the Presence of Various Distortion Types

The robustness of color-based signatures in the presence of a selection of representative distortions is investigated. Five signatures, developed and evaluated within a new modular framework, are considered. Two of the signatures are derived directly from histograms gathered from video frames. The other three capture temporal information by computing difference histograms between adjacent frames. In order to obtain objective and reproducible results, the evaluations are conducted on several randomly assembled test sets. These test sets are extracted from a video repository that contains a wide range of broadcast content, including documentaries, sports, news, movies, etc. Overall, the experimental results show the adequacy of color-histogram-based signatures for video fingerprinting applications and indicate which type of signature should be preferred in the presence of certain distortions.
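
As an illustration of the two signature families, the following numpy sketch (hypothetical function names; frames are assumed to be decoded H x W x 3 uint8 arrays) computes a quantized per-frame color histogram and the difference histograms between adjacent frames.

    import numpy as np

    def frame_histogram(frame, bins=16):
        """Quantized RGB histogram of one frame, L1-normalized."""
        q = (frame // (256 // bins)).reshape(-1, 3).astype(np.int64)
        idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
        hist = np.bincount(idx, minlength=bins ** 3).astype(float)
        return hist / hist.sum()

    def difference_signature(frames, bins=16):
        """Temporal signature: histogram differences between adjacent frames."""
        hists = [frame_histogram(f, bins) for f in frames]
        return np.array([h2 - h1 for h1, h2 in zip(hists, hists[1:])])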

The Investigation of the Role of Institutions in the Process of Growth and Development of Economy

New Institutional Economics generalizes and extends neoclassical economics by adding institutional theory to economics. It is clear that appropriate institutions are among the factors that lead to the success of economic programs. If the institutions are appropriate, society saves resources, and when time is used efficiently in applying a program, welfare increases and the average revenue product rises as well. In an economy, one should not expect economic programs to materialize with only a model for estimation and prediction; institutions aligned with the same purpose, alongside production, are needed to shape the process of growth and development. In this research, the role of institutions in transaction costs, financial markets, and the distribution of income and capital, and their influence on the process of growth and development, are investigated so that the handicaps and problems of Iran's economic institutions can be recognized. In other words, the incapability, unproductiveness, and ambiguity of institutions in Iran's economy are among the factors that hinder economic growth and development. For example, why has the Iranian government, as an important institution with 20 ministries, 83 organizations, and 60 years of planning, been unable to bring about growth and development?

A Dynamic Model of Air Pollution, Health, and Population Growth Using System Dynamics: A Study on Tehran, Iran (With Computer Simulation by the Software Vensim)

The significance of environmental protection is well-known in today's world. The execution of any program depends on sufficient knowledge of, and the required familiarity with, the environment and its pollutants. Taking advantage of a systematic method, as a new science, in environmental planning can solve many problems. In this article, air pollution in Tehran and its relationship with health and population growth have been analyzed using system dynamics. First, causal loops were used to consider the relationships between the parameters affecting air pollution in Tehran; these causal loops were then turned into flow diagrams [6] and finally simulated using the software Vensim [16] in order to determine what effect each parameter will have on air pollution in Tehran over the next 10 years, how changing one or more parameters influences the others, and which parameter most requires control.
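
Outside Vensim, the same stock-and-flow logic can be mimicked with simple Euler integration. The Python sketch below is illustrative only: the parameter names and values are placeholders, not those of the Tehran model.

    import numpy as np

    def simulate(years=10, dt=0.05, population=8.5e6, pop_growth=0.015,
                 emission_per_capita=0.02, dispersion_rate=0.3, pollution0=1.0):
        """Euler integration of a one-stock model:
        d(pollution)/dt = emissions(population) - dispersion_rate * pollution."""
        pollution, history = pollution0, []
        for _ in range(int(years / dt)):
            emissions = emission_per_capita * population / 1e6   # inflow
            pollution += dt * (emissions - dispersion_rate * pollution)
            population *= 1 + pop_growth * dt                    # exponential growth
            history.append(pollution)
        return np.array(history)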

Removal of CO2 and H2S using Aqueous Alkanolamine Solutions

This work presents a theoretical investigation of the simultaneous absorption of CO2 and H2S into aqueous solutions of MDEA and DEA. In this process the acid components react with the basic alkanolamine solution via an exothermic, reversible reaction in a gas/liquid absorber. The use of amine solvents for gas sweetening has been investigated using the process simulation programs HYSYS and ASPEN, with the Electrolyte NRTL model and the Amine Package with the Amines (experimental) equation of state. The effects of temperature, circulation rate, amine concentration, packed column height, and Murphree efficiency on the rate of absorption were studied. When the lean amine flow and concentration increase, CO2 and H2S absorption increase as well. As the inlet amine temperature in the absorber rises, CO2 and H2S penetrate to the upper stages of the absorber and the absorption of acid gases decreases. The CO2 concentration in the clean gas can be greatly influenced by the packing height, whereas for the H2S concentration in the clean gas the packing height plays only a minor role. HYSYS cannot estimate the Murphree efficiency correctly and applies the same value across all stages. As the Murphree efficiency improves, the maximum temperature of the absorber decreases, the reaction zone shifts toward the bottom stages of the absorber, and the absorption of acid gases increases.

The Bipartite Ramsey Numbers b(C2m; C2n)

Given bipartite graphs H1 and H2, the bipartite Ramsey number b(H1;H2) is the smallest integer b such that, for any subgraph G of the complete bipartite graph Kb,b, either G contains a copy of H1 or its complement relative to Kb,b contains a copy of H2. It is known that b(K2,2;K2,2) = 5, b(K2,3;K2,3) = 9, b(K2,4;K2,4) = 14 and b(K3,3;K3,3) = 17. In this paper we study the case in which both H1 and H2 are even cycles, and prove that b(C2m;C2n) ≥ m + n - 1 for m = n, and that b(C2m;C6) = m + 2 for m ≥ 4.
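
In symbols, the definition above reads (standard notation, restating the prose rather than adding a new result):

    \[
    b(H_1;H_2) = \min\bigl\{\, b \in \mathbb{N} :
      \text{for every } G \subseteq K_{b,b},\;
      H_1 \subseteq G \ \text{or}\ H_2 \subseteq K_{b,b} - E(G) \,\bigr\}.
    \]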

A New Technique for Multi-Resolution Characterization of Epileptic Spikes in EEG

A technique is proposed for the automatic detection of spikes in electroencephalograms (EEG). A multi-resolution approach and a non-linear energy operator are exploited. The signal on each EEG channel is decomposed into three sub-bands using a non-decimated wavelet transform (WT). The WT is a powerful tool for multi-resolution analysis of non-stationary signals, as well as for signal compression, recognition, and restoration. Each sub-band is analyzed using a non-linear energy operator in order to detect spikes. A decision rule detects the presence of spikes in the EEG, relying upon the energy of the three sub-bands. The effectiveness of the proposed technique was confirmed by analyzing both test signals and EEG recordings.
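
A standard instance of such an operator is the Teager energy operator, psi[x](n) = x(n)^2 - x(n-1) x(n+1); the sketch below is a minimal numpy version, assuming this is the operator intended and using a simple mean-based threshold to flag candidate spikes in one sub-band.

    import numpy as np

    def teager_energy(x):
        """Non-linear (Teager) energy: psi[x](n) = x(n)^2 - x(n-1) * x(n+1)."""
        x = np.asarray(x, dtype=float)
        psi = np.zeros_like(x)
        psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
        return psi

    def spike_candidates(subband, factor=8.0):
        """Indices where the operator output exceeds a multiple of its mean."""
        psi = teager_energy(subband)
        return np.where(psi > factor * psi.mean())[0]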

A Survey on Performance Tools for OpenMP

Advances in processor architecture, such as multi-core designs, increase the complexity of parallel computer systems. With multi-core architectures there are different parallel languages that can be used to write parallel programs. One of these is OpenMP, which is embedded in C/C++ or FORTRAN. Because of this new architecture and its complexity, it is very important to evaluate the performance of OpenMP constructs, kernels, and application programs on multi-core systems. Performance analysis is the activity of collecting information about the execution characteristics of a program. A performance tool consists of at least three interfacing software layers: instrumentation, measurement, and analysis. The instrumentation layer defines the measured performance events. The measurement layer determines which performance events are actually captured and how they are measured by the tool. The analysis layer processes the performance data and summarizes it into a form that can be displayed. In this paper, a number of OpenMP performance tools are surveyed, explaining how each is used to collect, analyze, and display performance data.
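
The three layers are language-agnostic, so they can be sketched compactly; the Python code below illustrates the layering only (it is not an OpenMP tool): instrumentation as a wrapper that defines the measured event, measurement as event capture, and analysis as summarization for display.

    import time
    from collections import defaultdict

    events = defaultdict(list)          # measurement layer: captured timings

    def instrument(func):
        """Instrumentation layer: define a timing event around a function."""
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            events[func.__name__].append(time.perf_counter() - start)
            return result
        return wrapper

    def analyze():
        """Analysis layer: summarize raw events into a displayable form."""
        return {name: {"calls": len(ts), "mean_s": sum(ts) / len(ts)}
                for name, ts in events.items()}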

Dynamics and Control of a Chaotic Electromagnetic System

In this paper, different nonlinear dynamics analysis techniques are employed to unveil the rich nonlinear phenomena of the electromagnetic system. In particular, bifurcation diagrams, time responses, phase portraits, Poincare maps, power spectrum analysis, and the construction of basins of attraction are all powerful and effective tools for nonlinear dynamics problems. We also employ the method of Lyapunov exponents to show the occurrence of chaotic motion and to verify the numerical simulation results. Finally, two cases are presented in which a chaotic electromagnetic system is effectively controlled by a reference signal or synchronized to another nonlinear electromagnetic system.
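
The Lyapunov test mentioned above can be sketched with the classic two-trajectory method: integrate the system and a slightly perturbed copy, renormalize their separation each step, and average the logarithmic stretching. The Python sketch below substitutes a generic Duffing-type oscillator for the electromagnetic system, whose equations the abstract does not give.

    import numpy as np

    def rhs(state, t, delta=0.2, alpha=-1.0, beta=1.0, gamma=0.3, omega=1.2):
        """Duffing-type oscillator used as a generic chaotic stand-in."""
        x, v = state
        return np.array([v, -delta * v - alpha * x - beta * x ** 3
                            + gamma * np.cos(omega * t)])

    def largest_lyapunov(x0=(0.1, 0.0), dt=0.01, steps=200_000, d0=1e-8):
        a = np.array(x0, float)
        b = a + np.array([d0, 0.0])          # perturbed twin trajectory
        total = 0.0
        for i in range(steps):
            t = i * dt
            a = a + dt * rhs(a, t)           # Euler step (crude but illustrative)
            b = b + dt * rhs(b, t)
            d = np.linalg.norm(b - a)
            total += np.log(d / d0)          # accumulate stretching
            b = a + (b - a) * (d0 / d)       # renormalize the separation
        return total / (steps * dt)          # > 0 indicates chaotic motion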

Personal Health Assistance Service Expert System (PHASES)

In this paper the authors present the framework of a system for assisting users through counseling on personal health, the Personal Health Assistance Service Expert System (PHASES). Personal health assistance systems need Personal Health Records (PHR), which support wellness activities, improve the understanding of personal health issues, enable access to data from providers of health services, strengthen health promotion, and in the end improve the health of the population. This is especially important in societies where the health costs increase at a higher rate than the overall economy. The most important elements of a healthy lifestyle are related to food (such as balanced nutrition and diets), activities for body fitness (such as walking, sports, fitness programs), and other medical treatments (such as massage, prescriptions of drugs). The PHASES framework uses an ontology of food, which includes nutritional facts, an expert system keeping track of personal health data that are matched with medical treatments, and a comprehensive data transfer between patients and the system.

Analysis of Sonogram Images of Thyroid Gland Based on Wavelet Transform

Sonogram images of normal and lymphocytic thyroid tissues overlap considerably, which makes them difficult to interpret and distinguish. Classification from sonogram images of the thyroid gland is tackled in a semiautomatic way. When a diagnosis is made manually from images, some relevant information may not be recognized by the human visual system. Quantitative image analysis can therefore assist the manual diagnostic process so far performed by physicians. Two classes are considered: normal tissue and chronic lymphocytic thyroiditis (Hashimoto's thyroiditis). The data structure is analyzed using K-nearest-neighbors classification. This paper shows that, unlike the energies of the wavelet sub-bands, histograms and Haralick features are not appropriate for distinguishing between normal tissue and Hashimoto's thyroiditis.
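
A minimal sketch of this feature-plus-classifier pipeline, assuming PyWavelets for the decomposition and scikit-learn's KNN (the abstract does not name its tooling), could look like this:

    import numpy as np
    import pywt
    from sklearn.neighbors import KNeighborsClassifier

    def subband_energies(roi, wavelet="db4", level=2):
        """Feature vector: energy of each wavelet sub-band of a 2-D region."""
        coeffs = pywt.wavedec2(np.asarray(roi, float), wavelet, level=level)
        feats = [np.sum(coeffs[0] ** 2)]            # approximation band
        for details in coeffs[1:]:                  # (cH, cV, cD) per level
            feats.extend(np.sum(d ** 2) for d in details)
        return np.array(feats)

    def train_classifier(rois, labels, k=3):
        """labels: 0 = normal tissue, 1 = Hashimoto's thyroiditis."""
        clf = KNeighborsClassifier(n_neighbors=k)
        clf.fit([subband_energies(r) for r in rois], labels)
        return clf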

Effect of Non-Uniformity Factors and Assignment Factors on Errors in Charge Simulation Method with Point Charge Model

The Charge Simulation Method (CSM) is one of the most widely used numerical field computation techniques in High Voltage (HV) engineering. High-voltage fields of varying non-uniformity are encountered in practice. Since CSM programs are case specific, the simulation accuracy depends heavily on the user's (programmer's) experience. This work is an effort to understand CSM errors and to evolve guidelines for setting up accurate CSM models, relating non-uniformities to assignment factors. The results are for the six-point-charge model of a sphere-plane gap geometry. Using a genetic algorithm (GA) as a tool, the optimum assignment factors at different non-uniformity factors for this model have been evaluated and analyzed. It is shown that symmetrically placed six-point-charge models can be good enough to set up CSM programs with potential errors of less than 0.1% when the field non-uniformity factor is greater than 2.64 (field utilization factor less than 52.76%).
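
The core CSM loop — solve for fictitious charges that reproduce the boundary potential at contour points, then measure the residual potential error at independent check points — can be sketched as follows (a simplified Python illustration with image charges enforcing the grounded plane; the charge layout here is generic, not the paper's optimized six-charge configuration):

    import numpy as np

    EPS0 = 8.854e-12

    def potential(points, charges, charge_pos):
        """Potential at Nx3 `points` from charges above a grounded plane z = 0,
        with image charges mirrored below the plane. charge_pos is an (n, 3) array."""
        v = np.zeros(len(points))
        for q, p in zip(charges, charge_pos):
            mirror = p * np.array([1.0, 1.0, -1.0])
            v += q / (4 * np.pi * EPS0) * (
                1 / np.linalg.norm(points - p, axis=1)
                - 1 / np.linalg.norm(points - mirror, axis=1))
        return v

    def fit_and_check(contour_pts, check_pts, charge_pos, v_boundary=1.0):
        """Fit charges to the boundary condition, return the worst relative
        potential error at the check points (the CSM accuracy criterion)."""
        n = len(charge_pos)
        A = np.column_stack([potential(contour_pts, np.eye(n)[i], charge_pos)
                             for i in range(n)])
        charges, *_ = np.linalg.lstsq(
            A, np.full(len(contour_pts), v_boundary), rcond=None)
        err = potential(check_pts, charges, charge_pos) - v_boundary
        return np.max(np.abs(err)) / v_boundary

A GA would then search over the assignment factors that determine charge_pos for a given gap geometry, using this potential error as the fitness.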

A Novel Nano-Scaled SRAM Cell

To help overcome the density limits of conventional SRAMs and the leakage current of SRAM cells in nano-scaled CMOS technology, we have developed a four-transistor SRAM cell. The newly developed CMOS four-transistor SRAM cell uses one word-line and one bit-line during read/write operations. The cell retains its data through leakage current and positive feedback, without a refresh cycle. The new cell is 19% smaller than a conventional six-transistor cell using the same design rules, and its leakage current is 60% smaller than that of a conventional six-transistor SRAM cell. Simulation results in 65nm CMOS technology show that the new cell operates correctly during read/write operations and in idle mode.

Landowners' Participation Behavior on the Payment for Environmental Service (PES): Evidence from Taiwan

In response to the Kyoto Protocol, the Payment for Environmental Service (PES) policy entitled the "Plain Landscape Afforestation Program (PLAP)" was certified by the Executive Yuan in Taiwan on 31 August 2001 and has been implemented for six years, since 1 January 2002. Although the PLAP has received many positive comments, there are still difficulties in its implementation, such as insufficient afforestation technology, private landowners' low interest in participating, insufficient subsidies, and so on, which are potential threats that may hinder the PLAP from moving forward. In this paper, selecting Ping-Tung County in Taiwan as the sample region and targeting private landowners with and without the intention to participate in the PLAP, we conduct an empirical analysis based on a Logit model to investigate the factors that determine whether private landowners join the PLAP, so as to gauge the incentive effects of the PLAP on the personal decision to afforest. The possible determinants of a private landowner's participation include the landowner's characteristics, cropland characteristics, and policy factors. The policy factors include the afforestation subsidy amount (+), the duration of the afforestation subsidy (+), and the rules on adjoining and adjacent areas (+); these do not reach statistical significance, but the signs of the variables are consistent with the intuition behind the policy. As for the landowners' characteristics, each of age (+), education level (–), and annual household income (+) is significant at the 10% level; as for the cropland characteristics, each of cropland area (+), cropland price (–), and the number of cropland parcels (–) is significant at the 1% level. In light of the above, the cropland characteristics are the dominant factors determining the probability of a landowner's participation in the PLAP. In the Logit model established in this paper, the probability of correctly classifying nonparticipants is 98%, the probability of correctly classifying participants is 71.8%, and the overall classification rate is 95%. In addition, the Hosmer-Lemeshow test and the omnibus test reveal that the Logit model provides a good fit and good predictive power for forecasting private landowners' participation in this program. The empirical results of this paper are expected to aid the implementation of afforestation programs in Taiwan.
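
The estimation itself is standard; a minimal sketch with statsmodels (column names here are illustrative stand-ins for the survey variables, which are not public) is:

    import statsmodels.api as sm

    def fit_participation_logit(df):
        """df: one row per surveyed landowner; `participates` is 1/0."""
        X = sm.add_constant(df[["age", "education", "income",
                                "cropland_area", "cropland_price", "n_parcels",
                                "subsidy_amount", "subsidy_duration"]])
        model = sm.Logit(df["participates"], X).fit()
        pred = (model.predict(X) >= 0.5).astype(int)    # classify at 0.5
        hit_rate = (pred == df["participates"]).mean()  # overall accuracy
        return model, hit_rate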

Image Contrast Enhancement based Sub-histogram Equalization Technique without Over-equalization Noise

In order to enhance the contrast in regions where the pixels have similar intensities, this paper presents a new histogram equalization scheme. Conventional global equalization schemes over-equalize such regions, producing overly bright or dark pixels, while local equalization schemes produce unexpected discontinuities at the boundaries of the blocks. The proposed algorithm segments the original histogram into sub-histograms with reference to brightness level and equalizes each sub-histogram to a limited extent, considering its mean and variance. The final image is determined as the weighted sum of the equalized images obtained from the sub-histogram equalizations. By limiting the maximum and minimum ranges of the equalization operations on the individual sub-histograms, the over-equalization effect is eliminated. The resulting image also does not lose feature information in low-density histogram regions, since these regions are equalized separately. This paper also describes how to determine the segmentation points in the histogram. The proposed algorithm has been tested on more than 100 images of various contrasts, and the results are compared to conventional approaches to show its superiority.
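
A simplified version of the scheme — split the gray-level range at chosen points and equalize each segment only within its own range — can be sketched in numpy as follows (the split points are fixed here for brevity, whereas the paper derives them from the histogram and weights the partial results):

    import numpy as np

    def sub_histogram_equalize(img, split_points=(85, 170)):
        """Equalize each brightness segment of an 8-bit grayscale image
        within its own range, avoiding global over-equalization."""
        bounds = [0, *split_points, 256]
        out = np.zeros_like(img)
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            mask = (img >= lo) & (img < hi)
            if not mask.any():
                continue
            hist, _ = np.histogram(img[mask], bins=hi - lo, range=(lo, hi))
            cdf = hist.cumsum() / hist.sum()
            # Map segment pixels back into [lo, hi - 1] only.
            out[mask] = (lo + cdf[img[mask].astype(int) - lo]
                         * (hi - 1 - lo)).astype(img.dtype)
        return out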

Generating Speq Rules based on Automatic Proof of Logical Equivalence

In the Equivalent Transformation (ET) computation model, a program is constructed by the successive accumulation of ET rules. A meta-computation method by which a correct ET rule is generated has been proposed. Although the method covers a broad range of ET-rule generation, not all important ET rules are necessarily generated. More ET rules can be generated by supplementing the method with generation procedures specialized for important ET rules. A Specialization-by-Equation (Speq) rule is one such important rule. A Speq rule describes a procedure in which two variables included in an atom conjunction are equalized due to predicate constraints. In this paper, we propose an algorithm that systematically and recursively generates Speq rules and discuss its effectiveness in the synthesis of ET programs. A Speq rule is generated based on a proof of a logical formula consisting of a given atom set and a dis-equality. The proof is carried out by utilizing some ET rules, and the rules ultimately obtained are used in generating Speq rules.

Medical Image Segmentation Based On Vigorous Smoothing and Edge Detection Ideology

Medical image segmentation based on image smoothing followed by edge detection assumes a great degree of importance in the field of image processing. In this regard, this paper proposes a novel algorithm for medical image segmentation based on vigorous smoothing, by identifying the type of noise, and an edge detection ideology, which promises to be a boon for medical image diagnosis. The main objective of this algorithm is to take a particular medical image as input, preprocess it to remove the noise content by employing a suitable filter after identifying the type of noise, and finally carry out edge detection for image segmentation. The algorithm consists of three parts. First, the type of noise present in the medical image is identified as additive, multiplicative, or impulsive by analysis of local histograms, and the image is denoised with a Median, Gaussian, or Frost filter accordingly. Second, edge detection of the filtered medical image is carried out using the Canny edge detection technique. The third part is the segmentation of the edge-detected medical image by the method of Normalized Cut eigenvectors. The method is validated through experiments on real images, simulated on the MATLAB platform. The simulation results show that the proposed algorithm is very effective: it can deal with low-quality or marginally vague images having high spatial redundancy, low contrast, and considerable noise, and it has potential for practical use in medical image diagnosis.
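
The pipeline's first two stages can be sketched with OpenCV. The noise test below is a deliberately crude stand-in for the paper's local-histogram analysis (impulsive noise guessed from the share of extreme-valued pixels), and the sketch stops at the Canny edge map rather than the Normalized Cut step:

    import cv2
    import numpy as np

    def preprocess_and_detect_edges(path):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        # Crude impulsive-noise indicator: fraction of near-extreme pixels.
        extreme = np.mean((img <= 5) | (img >= 250))
        if extreme > 0.02:
            denoised = cv2.medianBlur(img, 5)               # impulsive noise
        else:
            denoised = cv2.GaussianBlur(img, (5, 5), 1.0)   # additive noise
        return cv2.Canny(denoised, 50, 150)                 # edge detection

(The Frost filter for multiplicative speckle has no stock OpenCV call and is omitted from this sketch.)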

An Automatic Tool for Checking Consistency between Data Flow Diagrams (DFDs)

The system development life cycle (SDLC) is a process used during the development of any system. The SDLC consists of four main phases: analysis, design, implementation, and testing. During the analysis phase, a context diagram and data flow diagrams are used to produce the process model of a system. Consistency between the context diagram and the lower-level data flow diagrams is very important for a smooth development process. However, manually checking this consistency with a checklist is a time-consuming process. At the same time, the limited human ability to spot errors is one of the factors that influence the correctness and balancing of the diagrams. This paper presents a tool that automates the consistency check between Data Flow Diagrams (DFDs) based on the rules of DFDs. The tool serves two purposes: as an editor to draw the diagrams and as a checker to check the correctness of the diagrams drawn. The consistency check between the context diagram and the lower-level data flow diagrams is embedded inside the tool to overcome the problems of manual checking.
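
The balancing rule at the heart of such a check — every data flow attached to a process in the parent diagram must reappear in that process's child diagram — reduces to a set comparison. A minimal Python sketch, with diagrams represented simply as sets of flow names (an illustrative data model, not the tool's internal one):

    def check_balancing(parent_flows, child_flows):
        """parent_flows: external flows of one process in the parent DFD;
        child_flows: external flows of its child (lower-level) DFD."""
        missing_in_child = parent_flows - child_flows
        extra_in_child = child_flows - parent_flows
        consistent = not missing_in_child and not extra_in_child
        return consistent, missing_in_child, extra_in_child

    # Example: the child diagram introduces an undeclared "log" flow.
    ok, missing, extra = check_balancing({"order", "invoice"},
                                         {"order", "invoice", "log"})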

Exploiting Global Self-Similarity for Head-Shoulder Detection

People detection from images has a variety of applications, such as video surveillance and driver assistance systems, but it remains a challenging task, and it is more difficult in crowded environments such as shopping malls, where occlusion of the lower parts of the human body often occurs. The lack of full-body information requires more effective features than common ones such as HOG. In this paper, new features are introduced that exploit the global self-similarity (GSS) characteristic of head-shoulder patterns. The features encode the similarity or difference of color histograms and oriented gradient histograms between two vertically symmetric blocks. These domain-specific features are rapid to compute from integral images in the Viola-Jones cascade-of-rejecters framework. The proposed features are evaluated on our own head-shoulder dataset which, in part, consists of the well-known INRIA pedestrian dataset. Experimental results show that the GSS features are effective in reducing false alarms and that the gradient GSS features are selected more often than the color GSS ones during feature selection.
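
The speed comes from integral images: any block sum costs four lookups, so a symmetric-block feature is cheap inside the cascade. A numpy sketch of the lookup and of a simple intensity-based GSS feature (hypothetical coordinates; the histogram variants work the same way per histogram bin):

    import numpy as np

    def integral_image(img):
        """Summed-area table with a zero row/column for easy indexing."""
        return np.pad(np.cumsum(np.cumsum(img, 0), 1), ((1, 0), (1, 0)))

    def block_sum(ii, top, left, h, w):
        """Sum of img[top:top+h, left:left+w] in O(1) via four lookups."""
        return (ii[top + h, left + w] - ii[top, left + w]
                - ii[top + h, left] + ii[top, left])

    def gss_feature(img, top, left, h, w, pattern_width):
        """Difference between two blocks mirrored about the vertical axis."""
        ii = integral_image(np.asarray(img, float))
        left_mirror = pattern_width - left - w     # vertically symmetric block
        return abs(block_sum(ii, top, left, h, w)
                   - block_sum(ii, top, left_mirror, h, w))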

Automatic Segmentation of Thigh Magnetic Resonance Images

Purpose: To develop a method for the automatic segmentation of adipose and muscular tissue in thighs from magnetic resonance images. Materials and methods: Thirty obese women were scanned on a Siemens Impact Expert 1T resonance machine, and 1500 images were finally used in the tests. The developed segmentation method is a recursive, multilevel process that makes use of several concepts such as shaped histograms, adaptive thresholding, and connectivity. The segmentation process was implemented in Matlab and operates without the need for any user interaction. The whole set of images was segmented with the developed method. An expert radiologist segmented the same set of images following a manual procedure with the aid of the SliceOmatic software (Tomovision); these constituted our 'gold standard'. Results: The number of pixels on which the automatic and manual segmentation procedures coincided was measured. On average, agreement was above 90% in most of the images. Conclusions: The proposed approach allows effective automatic segmentation of thigh MRIs, comparable to expert manual performance.
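
Two of the named ingredients, adaptive thresholding and connectivity, can be sketched per slice with scipy/scikit-image (an Otsu threshold standing in for the adaptive step, and connected-component labeling for connectivity; the actual method is recursive and multilevel):

    import numpy as np
    from scipy import ndimage
    from skimage.filters import threshold_otsu

    def segment_slice(img):
        """Threshold one slice, then keep the largest connected region."""
        mask = img > threshold_otsu(img)       # histogram-derived threshold
        labels, n = ndimage.label(mask)        # connectivity analysis
        if n == 0:
            return mask
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        return labels == (1 + np.argmax(sizes))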