Impact of Faults in Different Software Systems: A Survey

Software maintenance is an extremely important activity in the software development life cycle, and it consumes considerable human effort, cost, and time. Software maintenance can be further subdivided into activities such as fault prediction, fault detection, fault prevention, and fault correction. The topic has gained substantial attention due to increasingly sophisticated and complex applications, commercial hardware, clustered architectures, and artificial intelligence. In this paper we survey the work done in the field of software maintenance. Software fault prediction has been studied in the context of fault-prone modules, self-healing systems, developer information, maintenance models, and related areas. However, several aspects of fault severity, such as modeling and weighting the impact of different kinds of faults in various types of software systems, still need to be explored.

Application of Pearson Parametric Distribution Model in Fatigue Life Reliability Evaluation

The aim of this paper is to introduce a parametric distribution model for fatigue life reliability analysis that deals with variation in material properties. Service loads, in the form of a response-time history signal of Belgian pave, were replicated on a multi-axial spindle-coupled road simulator, and the stress-life method was used to estimate the fatigue life of an automotive stub axle. A PSN curve was obtained by monotonic tension testing, and a two-parameter Weibull distribution function was used to obtain the mean life of the component. A Pearson system was developed to evaluate the fatigue life reliability by considering the stress range intercept and the slope of the PSN curve as random variables. Assuming a normal distribution of fatigue strength, the fatigue life of the stub axle is found to have the highest reliability between 10,000 and 15,000 cycles. Taking into account the variation of material properties associated with the size effect and with machining and manufacturing conditions, the method described in this study can be effectively applied to determine the probability of failure of mass-produced parts.
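
As a minimal illustration of the two-parameter Weibull step described above (not the authors' code; the cycle-to-failure data below are hypothetical), the sketch fits a Weibull distribution with the location parameter fixed at zero, then reports the mean life and the reliability R(N) inside the 10,000-15,000 cycle window:

```python
# Illustrative sketch: mean fatigue life and reliability from a two-parameter Weibull fit.
import numpy as np
from scipy.stats import weibull_min

# Hypothetical fatigue lives (cycles to failure) at one stress level
lives = np.array([8200, 9500, 11000, 12400, 13800, 15100, 16900])

# Two-parameter Weibull: fix the location parameter at zero
shape, loc, scale = weibull_min.fit(lives, floc=0)

mean_life = weibull_min.mean(shape, loc=loc, scale=scale)
# Reliability R(N) = P(life > N)
for n in (10_000, 12_500, 15_000):
    r = weibull_min.sf(n, shape, loc=loc, scale=scale)
    print(f"N = {n:>6} cycles, R(N) = {r:.3f}")
print(f"Weibull mean life approx. {mean_life:.0f} cycles")
```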

Information Technology Application for Knowledge Management in Medium-Size Businesses

The results of a study on knowledge management systems in businesses show that most of these businesses provide internet access for their employees so that they can acquire new knowledge via the internet, corporate websites, electronic mail, and electronic learning systems. These business organizations use information technology applications for knowledge management because of their convenience, time savings, ease of use, accuracy of information, and usefulness of knowledge. The results indicate the following requirements for improving corporate knowledge management systems: 1) administration must support the corporate knowledge management system; 2) the goal of corporate knowledge management must be clear; 3) corporate culture should facilitate the exchange and sharing of knowledge within the organization; 4) the cooperation of personnel at all levels must be obtained; 5) an information technology infrastructure must be provided; and 6) the system must be developed regularly and continuously.

A Virtual Reality Laboratory for Distance Education in Chemistry

Simulations play a major role in education, not only because they provide realistic models with which students can interact to acquire real-world experience, but also because they constitute safe environments in which students can repeat processes without any risk in order to grasp concepts and theories more easily. Virtual reality is widely recognized as a significant technological advance that can facilitate the learning process through the development of highly realistic 3D simulations supporting immersive and interactive features. The objective of this paper is to analyze the influence of the use of virtual reality in chemistry instruction and to present an integrated web-based learning environment for the simulation of chemical experiments. The proposed application constitutes a cost-effective solution for schools and universities without appropriate infrastructure and a valuable tool for distance learning and life-long education in chemistry. Its educational objectives are the familiarization of students with the equipment of a real chemical laboratory and the execution of virtual volumetric analysis experiments with the active participation of students.

A New Method for Extracting Ocean Wave Energy Utilizing the Wave Shoaling Phenomenon

Fossil fuels are the major source for meeting the world's energy requirements, but their rapidly diminishing reserves and adverse effects on our ecological system are of major concern. Utilization of renewable energy is needed to meet future challenges, and ocean energy is one of these promising resources. Three-fourths of the earth's surface is covered by the oceans, and this enormous energy resource is contained in the oceans' waters, the air above the oceans, and the land beneath them. The ocean's renewable energy is contained mainly in waves, ocean currents, and offshore solar energy. Relatively few efforts have been made to harness this reliable and predictable resource. Harnessing ocean energy requires detailed knowledge of the underlying governing equations and their analysis. With the advent of extraordinary computational resources, it is now possible to predict wave climatology in laboratory simulation. Several techniques have been developed, most of which stem from numerical analysis of the Navier-Stokes equations. This paper presents a brief overview of such mathematical models and tools for understanding and analyzing wave climatology. Models of the 1st, 2nd, and 3rd generations have been developed to estimate wave characteristics and assess the power potential. A brief overview of available wave energy technologies is also given. A novel concept for an on-shore wave energy extraction method is presented at the end. The concept is based on total energy conservation, where the energy of the wave is transferred to a flexible converter to increase its kinetic energy. A squeezing action exerted by the external pressure on the converter body results in increased velocities at the discharge section, and the resulting high velocity head can then be used for energy storage or directly for power generation. The converter utilizes both the potential and kinetic energy of the waves and is designed for on-shore or near-shore application. The increased wave height at the shore due to shoaling increases the potential energy of the waves, which is converted to renewable energy. This approach should result in an economical wave energy converter because of the near-shore installation and the denser waves produced by shoaling, and the method should be more efficient because it taps both the potential and kinetic energy of the waves.
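
For reference, the shoaling effect invoked above can be expressed with the standard linear-wave-theory relation (an illustrative textbook result, not a derivation taken from this paper): conservation of wave energy flux between an offshore section (subscript 0) and a nearshore section (subscript 1) relates the wave heights through the group velocities, and in shallow water this reduces to Green's law.

```latex
% Linear-wave-theory shoaling coefficient (illustrative, not reproduced from the paper)
\[
  E_0\, c_{g,0} = E_1\, c_{g,1}
  \quad\Longrightarrow\quad
  \frac{H_1}{H_0} = \sqrt{\frac{c_{g,0}}{c_{g,1}}} \equiv K_s ,
\]
% and in the shallow-water limit, where c_g \approx \sqrt{g h}, this becomes Green's law
\[
  K_s \approx \left(\frac{h_0}{h_1}\right)^{1/4}.
\]
```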

Text Mining Technique for Data Mining Application

Applying knowledge discovery techniques to unstructured text is termed knowledge discovery in text (KDT), text data mining, or text mining. The decision tree approach is most useful for classification problems: a tree is constructed to model the classification process, and the technique has two basic steps, building the tree and applying the tree to the database. This paper describes a proposed C5.0 classifier that uses rulesets, cross-validation, and boosting on top of the original C5.0 in order to reduce the error rate. The feasibility and benefits of the proposed approach are demonstrated on a medical data set, the hypothyroid data. It is shown that the performance of a classifier on the training cases from which it was constructed gives a poor estimate of its accuracy; whether by sampling or by using a separate test file, the classifier should be evaluated on cases that were not used to build it, provided both sets are large. If the cases in hypothyroid.data and hypothyroid.test were shuffled and divided into a new 2772-case training set and a 1000-case test set, C5.0 might construct a different classifier with a lower or higher error rate on the test cases. An important feature of See5 is its ability to generate classifiers called rulesets; the ruleset has an error rate of 0.5% on the test cases. The standard errors of the means provide an estimate of the variability of the results. One way to obtain a more reliable estimate of predictive accuracy is f-fold cross-validation, in which the error rate of a classifier produced from all the cases is estimated as the ratio of the total number of errors on the hold-out cases to the total number of cases. The Boost option with x trials instructs See5 to construct up to x classifiers in this manner; trials over numerous datasets, large and small, show that on average 10-classifier boosting reduces the error rate on test cases by about 25%.
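
Since C5.0/See5 is a proprietary tool, the sketch below only illustrates the two evaluation ideas named above, f-fold cross-validation and 10-trial boosting, using open-source stand-ins (a CART decision tree and AdaBoost from scikit-learn) on synthetic data in place of the hypothyroid files; it is not the authors' experiment.

```python
# Illustrative sketch: f-fold cross-validation and 10-classifier boosting with stand-ins for C5.0.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the hypothyroid data (2772 training + 1000 test cases = 3772)
X, y = make_classification(n_samples=3772, n_features=20, random_state=0)

single_tree = DecisionTreeClassifier(random_state=0)
boosted = AdaBoostClassifier(n_estimators=10, random_state=0)  # 10 boosting trials, default weak learners

# f-fold cross-validation with f = 10: every case is held out exactly once
for name, clf in [("single tree", single_tree), ("10-trial boosting", boosted)]:
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name:>18}: estimated error rate = {1 - scores.mean():.3f}")
```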

Application of Nano Cutting Fluid under Minimum Quantity Lubrication (MQL) Technique to Improve Grinding of Ti-6Al-4V Alloy

The Minimum Quantity Lubrication (MQL) technique has attracted significant attention in machining processes as a way to reduce the environmental load caused by conventional cutting fluids. Recently, nanofluids have been finding extensive application in mechanical engineering because of their superior lubrication and heat dissipation characteristics. This paper investigates the use of a nanofluid under MQL mode to improve the grinding characteristics of Ti-6Al-4V alloy. Taguchi's experimental design technique has been used in the present investigation, and a second-order model has been established to predict grinding forces and surface roughness. Different concentrations of water-based Al2O3 nanofluids were applied in the grinding operation through an MQL setup developed in-house, and the results have been compared with those of a conventional coolant and pure water. Experimental results show that grinding forces are reduced significantly when the nano cutting fluid is used, even at low nanoparticle concentrations, and that surface finish improves with higher nanoparticle concentrations.
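
To make the "second-order model" step concrete, the sketch below fits a generic quadratic response-surface model by ordinary least squares; the factor names, coded levels, and response values are hypothetical placeholders, not the authors' measurements or their exact Taguchi array.

```python
# Illustrative sketch: second-order model y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
import numpy as np

# Hypothetical coded factors: x1 = nanoparticle concentration, x2 = depth of cut
x1 = np.array([-1, -1, 1, 1, 0, 0, -1, 1, 0])
x2 = np.array([-1, 1, -1, 1, 0, -1, 0, 0, 1])
y  = np.array([1.42, 1.10, 1.05, 0.83, 0.95, 1.20, 1.25, 0.90, 1.00])  # e.g. surface roughness Ra in um

# Design matrix for the full second-order model and least-squares fit
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print("b0, b1, b2, b11, b22, b12 =", np.round(coeffs, 3))
```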

From Hype to Ignorance – A Review of 30 Years of Lean Production

Lean production (or lean management) has gained popularity in several waves, and the last three decades have been filled with numerous attempts to apply these concepts in companies. However, this has only been partially successful. The roots of lean production can be traced back to Toyota's just-in-time production. This concept, which according to the MIT research of Womack, Jones, and Roos was employed by Japanese car manufacturers, became popular under the international names "lean production" and "lean manufacturing" and was termed "Schlanke Produktion" in Germany. This contribution reviews lean production in Germany over the last thirty years: its development, trial and error, and implementation.

Evolutionary Approach for Automated Discovery of Censored Production Rules

In the recent past there has been increasing interest in applying evolutionary methods to Knowledge Discovery in Databases (KDD), and a number of successful applications of Genetic Algorithms (GA) and Genetic Programming (GP) to KDD have been demonstrated. The most common representation of the discovered knowledge is the standard Production Rule (PR) of the form If P Then D. PRs, however, are unable to handle exceptions and do not exhibit variable precision. Censored Production Rules (CPRs), an extension of PRs proposed by Michalski and Winston, exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form If P Then D Unless C, where C (the censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. A rule of this type allows the exception condition to be ignored when the resources needed to establish its presence are tight or when there is simply no information available as to whether it holds. Thus, the 'If P Then D' part of the CPR expresses the important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D. This paper presents a classification algorithm based on an evolutionary approach that discovers comprehensible rules with exceptions in the form of CPRs. The proposed approach uses a flexible chromosome encoding in which each chromosome corresponds to a CPR. Appropriate genetic operators are suggested, and a fitness function is proposed that incorporates the basic constraints on CPRs. Experimental results are presented to demonstrate the performance of the proposed algorithm.
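
The sketch below shows one plausible way to score a CPR candidate during such an evolutionary search; the encoding, the attribute names, the toy data, and the fitness weights are all hypothetical illustrations, not the paper's actual chromosome scheme or fitness function.

```python
# Hypothetical sketch: fitness of a Censored Production Rule "If P Then D Unless C",
# rewarding rules where P => D holds frequently and the censor C fires rarely.
def rule_fitness(records, P, D, C):
    """records: list of dicts of boolean attributes; P, D, C: attribute names."""
    covered = [r for r in records if r[P]]
    if not covered:
        return 0.0
    # D should follow from P unless the censor holds (which flips D to ~D)
    correct = sum(1 for r in covered if r[D] != r[C])
    censored = sum(1 for r in covered if r[C])
    accuracy = correct / len(covered)
    rarity = 1.0 - censored / len(covered)   # the censor should hold rarely
    return 0.7 * accuracy + 0.3 * rarity     # weights are arbitrary in this sketch

data = [
    {"bird": True,  "flies": True,  "penguin": False},
    {"bird": True,  "flies": True,  "penguin": False},
    {"bird": True,  "flies": False, "penguin": True},
    {"bird": False, "flies": False, "penguin": False},
]
# "If bird Then flies Unless penguin"
print(rule_fitness(data, P="bird", D="flies", C="penguin"))
```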

Federal Open Agent System Platform

An Open Agent System platform based on the High Level Architecture (HLA) is proposed to support applications involving heterogeneous agents. The basic idea is to develop different wrappers for different agent systems; each agent system is wrapped as a federate that joins a federation. Because the platform is based on the HLA, the advantages of this open standard, such as system interoperability and reuse, are naturally inherited. In particular, the federated architecture allows federates to be heterogeneous, which supports the integration of different agent systems, and both implicit and explicit communication between agents can be supported. Taking the wrapper RTI_JADE as an example, the components of a wrapper are discussed. Finally, the performance of RTI_JADE is analyzed; the results show that RTI_JADE works very efficiently.
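
A purely hypothetical sketch of the wrapper pattern follows; it is not the actual RTI_JADE implementation and does not use the standard HLA/RTI API, but it illustrates the idea that each agent platform is wrapped behind a common federate-style interface so heterogeneous systems can join the same federation.

```python
# Hypothetical wrapper interface (names invented for illustration only).
from abc import ABC, abstractmethod

class AgentSystemWrapper(ABC):
    """Common interface a wrapper exposes for one agent platform acting as a federate."""

    @abstractmethod
    def join_federation(self, federation_name: str) -> None: ...

    @abstractmethod
    def publish(self, topic: str, payload: dict) -> None:
        """Explicit communication: forward an agent message into the federation."""

    @abstractmethod
    def reflect(self, topic: str, payload: dict) -> None:
        """Implicit communication: apply shared-attribute updates received from other federates."""

class JadeWrapper(AgentSystemWrapper):
    """Toy stand-in for a JADE-side wrapper."""
    def join_federation(self, federation_name):
        print(f"JADE container joining federation '{federation_name}'")
    def publish(self, topic, payload):
        print(f"JADE -> federation: {topic} {payload}")
    def reflect(self, topic, payload):
        print(f"federation -> JADE: {topic} {payload}")

w = JadeWrapper()
w.join_federation("open-agent-demo")
w.publish("order", {"item": "sensor-data", "qty": 1})
```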

A High Performance Technique in Harmonic Omitting Based on Predictive Current Control of a Shunt Active Power Filter

The proper operation of common active filters depends on the accuracy with which system distortion is identified, and using a suitable method for current injection and reactive power compensation leads to increased filter performance. This paper therefore presents a method based on predictive current control theory for shunt active filter applications. The harmonics of the load current are identified by applying the o–d–q reference frame transformation to the load current and eliminating the DC part of the d–q components. The remaining components are then delivered to the predictive current controller as a three-phase reference current via the inverse Park transformation. The system is modeled in the discrete-time domain. The proposed method has been tested using a MATLAB model for a nonlinear load (with a total harmonic distortion of 20%). The simulation results indicate that the proposed filter results in a sinusoidal source current (THD = 0.15%) and that the filter tracks the reference current accurately.
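
The synchronous-reference-frame extraction described above can be sketched as follows (an illustrative implementation, not the authors' MATLAB model; the sampling rate, fundamental frequency, and harmonic content of the test current are assumed): the load current is transformed to d–q, the DC part (the fundamental) is removed, and the remainder is transformed back to obtain the three-phase harmonic reference.

```python
# Illustrative sketch: harmonic reference extraction in the synchronous (d-q) frame.
import numpy as np

def abc_to_dq(ia, ib, ic, theta):
    d = (2/3) * (ia*np.cos(theta) + ib*np.cos(theta - 2*np.pi/3) + ic*np.cos(theta + 2*np.pi/3))
    q = -(2/3) * (ia*np.sin(theta) + ib*np.sin(theta - 2*np.pi/3) + ic*np.sin(theta + 2*np.pi/3))
    return d, q

def dq_to_abc(d, q, theta):
    ia = d*np.cos(theta) - q*np.sin(theta)
    ib = d*np.cos(theta - 2*np.pi/3) - q*np.sin(theta - 2*np.pi/3)
    ic = d*np.cos(theta + 2*np.pi/3) - q*np.sin(theta + 2*np.pi/3)
    return ia, ib, ic

fs, f1 = 20_000, 50                                   # assumed sampling and fundamental frequency
t = np.arange(0, 0.2, 1/fs)
theta = 2*np.pi*f1*t
# Hypothetical distorted load current: fundamental plus 5th and 7th harmonics
ia = np.sin(theta) + 0.2*np.sin(5*theta) + 0.1*np.sin(7*theta)
ib = np.sin(theta - 2*np.pi/3) + 0.2*np.sin(5*(theta - 2*np.pi/3)) + 0.1*np.sin(7*(theta - 2*np.pi/3))
ic = np.sin(theta + 2*np.pi/3) + 0.2*np.sin(5*(theta + 2*np.pi/3)) + 0.1*np.sin(7*(theta + 2*np.pi/3))

d, q = abc_to_dq(ia, ib, ic, theta)
d_ac, q_ac = d - d.mean(), q - q.mean()               # removing the DC part removes the fundamental
ref_a, ref_b, ref_c = dq_to_abc(d_ac, q_ac, theta)    # three-phase harmonic reference currents
print("peak harmonic reference (phase a):", round(float(ref_a.max()), 3))
```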

Face Recognition with Image Rotation Detection, Correction and Reinforced Decision using ANN

Rotation or tilt present in an image captured by digital means can be detected and corrected using an Artificial Neural Network (ANN) for use with a Face Recognition System (FRS). Principal Component Analysis (PCA) features of faces at different angles are used to train an ANN that detects the rotation of an input image, which is then corrected using a set of operations implemented in another ANN-based system. The work also deals with the recognition of human faces using features from the forehead, eyes, nose, and mouth as decision-support entities of a system configured as a Generalized Feed Forward Artificial Neural Network (GFFANN). These features are combined to provide a reinforced decision for verification of a person's identity despite illumination variations. The complete system, performing facial image rotation detection, correction, and recognition with reinforced decision support, achieves a success rate in the high 90s (percent).
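
A minimal sketch of the rotation-detection stage follows, under the assumption that PCA features of flattened face images feed a small neural classifier over discrete rotation classes; the data here are random stand-ins (so the printed accuracy is only chance level), and the pipeline is illustrative rather than the authors' exact architecture.

```python
# Illustrative sketch: PCA features + small neural network for rotation-class detection.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in data: 300 flattened 32x32 "face" images, each labelled with a rotation angle
X = rng.normal(size=(300, 32 * 32))
angles = rng.choice([0, 90, 180, 270], size=300)

X_train, X_test, y_train, y_test = train_test_split(X, angles, random_state=0)

pca = PCA(n_components=40).fit(X_train)          # eigenface-style feature extraction
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(pca.transform(X_train), y_train)

print("rotation-detection accuracy:", clf.score(pca.transform(X_test), y_test))
```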

The Willingness of Business Students toward Innovative Behavior within the Theory of Planned Behavior

Classes on creativity, innovation, and entrepreneurship are becoming quite popular at universities throughout the world. However, it is not easy for business students to get involved in innovative activities, especially patent application. The present study investigated how to enhance business students' intention to participate in innovative activities and which incentives universities should consider. A 22-item research scale was used, and confirmatory factor analysis was conducted to verify its reliability and validity. Multiple regression and discriminant analyses were also conducted. The results demonstrate the effect of growth-need strength on innovative behavior and indicate that the theory of planned behavior can explain and predict business students' intention to participate in innovative activities. Additionally, the results suggest that applying the proposed model in practice would effectively strengthen business students' intentions to engage in innovative activities.

Design of Low Power and High Speed Digital IIR Filter in 45nm with Optimized CSA for Digital Signal Processing Applications

In this paper, a design methodology for implementing a low-power and high-speed 2nd-order recursive digital Infinite Impulse Response (IIR) filter is proposed. Since IIR filters suffer from a large number of constant multiplications, the proposed method replaces the constant multiplications with addition/subtraction and shift operations. A new 6T adder cell is used as the Carry-Save Adder (CSA) to implement the addition/subtraction operations in the recursive section of the IIR filter in order to reduce the propagation delay. Furthermore, high-level algorithms designed to optimize the number of CSA blocks are used to reduce the complexity of the IIR filter. The DSCH3 tool is used to generate the schematic of the proposed 6T CSA based shift-add architecture, and the design is analyzed using the Microwind CAD tool to synthesize low-complexity and high-speed IIR filters. The proposed design outperforms MUX-12T and MCIT-7T based CSA adder filter designs in terms of power, propagation delay, area, and throughput. The experimental results show that the proposed 6T-based design method can find better IIR filter designs in terms of power and delay than those obtained using efficient general multipliers.
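
To illustrate the shift-add substitution itself (with an arbitrary coefficient chosen for the example, not one of the filter's actual coefficients), multiplying by 0.40625 = 13/32 can be realised as three shifted copies of the input summed and then shifted right by five bits:

```python
# Illustrative sketch: constant multiplication replaced by shifts and adds.
def mul_by_0p40625(x: int) -> int:
    # (x << 3) + (x << 2) + x == 13 * x; the final >> 5 divides by 32 (floor for non-negative x)
    return ((x << 3) + (x << 2) + x) >> 5

for x in (32, 100, 1024):
    print(x, mul_by_0p40625(x), 0.40625 * x)   # shift-add result vs. exact product
```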

Bi-lingual Handwritten Character and Numeral Recognition using Multi-Dimensional Recurrent Neural Networks (MDRNN)

The key to the continued success of ANNs depends considerably on the use of hybrid structures implemented on cooperative frameworks. Hybrid architectures provide the ANN with the ability to validate heterogeneous learning paradigms. This work describes the implementation of a set of distributed and hybrid ANN models for character recognition applied to Anglo-Assamese scripts. The objective is to demonstrate the effectiveness of hybrid ANN setups as an innovative means of neural learning for an application such as multilingual handwritten character and numeral recognition.

Analysis of Aiming Performance for Games Using Mapping Method of Corneal Reflections Based on Two Different Light Sources

The fundamental motivation of this paper is how gaze estimation can be utilized effectively in game applications. In games, precise point-of-gaze estimation is not always essential for aiming at targets; the ability to move a cursor to a target accurately is equally significant. Moreover, from a game production point of view, expressing head movement and gaze movement separately is sometimes advantageous for conveying a sense of presence; a representative example is panning a background image in response to head movement while moving a cursor according to gaze movement. The widely used technique for POG estimation is based on the relative position between the center of the corneal reflection of infrared light sources and the center of the pupil. However, calculating the center of the pupil requires relatively complicated image processing, and the resulting computational delay is a concern, since minimizing input delay is one of the most significant requirements in games. In this paper, a method to estimate head movement using only the corneal reflections of two infrared light sources at different locations is proposed, along with a method to control a cursor using both gaze movement and head movement. The proposed methods are evaluated using game-like applications; the results confirm performance similar to conventional methods while achieving aiming control with lower computational cost and stress-free, intuitive operation.
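
As a simplified sketch of how two corneal reflections alone could drive a head-movement estimate (an assumption about the general approach, not the paper's actual algorithm), one can track the midpoint of the two glints for in-plane translation and their spacing for distance changes, leaving gaze-driven cursor motion to a separate stage:

```python
# Hypothetical sketch: head motion from two corneal-reflection (glint) positions.
import numpy as np

def head_motion_from_glints(glints_prev, glints_curr):
    """glints_*: 2x2 arrays of (x, y) image positions of the two corneal reflections."""
    mid_prev = np.mean(glints_prev, axis=0)
    mid_curr = np.mean(glints_curr, axis=0)
    spacing_prev = np.linalg.norm(glints_prev[1] - glints_prev[0])
    spacing_curr = np.linalg.norm(glints_curr[1] - glints_curr[0])
    translation = mid_curr - mid_prev            # e.g. pan the background by this amount
    depth_scale = spacing_curr / spacing_prev    # >1 suggests the head moved closer to the camera
    return translation, depth_scale

prev = np.array([[310.0, 240.0], [330.0, 241.0]])
curr = np.array([[314.0, 238.0], [335.0, 239.0]])
print(head_motion_from_glints(prev, curr))
```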

Study of Natural Convection in a Triangular Cavity Filled with Water: Application of the Lattice Boltzmann Method

The Lattice Boltzmann Method (LBM) with double populations is applied to solve steady-state laminar natural convective heat transfer in a triangular cavity filled with water. The bottom wall is heated, the vertical wall is cooled, and the inclined wall is kept adiabatic. The buoyancy effect is modeled by applying the Boussinesq approximation to the momentum equation. The fluid velocity is determined by the D2Q9 LBM, and the energy equation is discretized by the D2Q4 LBM to compute the temperature field. Comparisons with previously published work are performed and found to be in excellent agreement. Numerical results are obtained for a wide range of parameters: the Rayleigh number from  to  and the inclination angle from 0° to 360°. Flow and thermal fields are exhibited by means of streamlines and isotherms. It is observed that the inclination angle can be used as a relevant parameter to control heat transfer in right-angled triangular enclosures.
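
For reference, the double-population BGK scheme referred to above has the standard form below (written generically from the LBM literature; the specific equilibrium distributions and forcing weights of the paper are not reproduced here): a D2Q9 population f for the flow field and a D2Q4 population g for the temperature, coupled through a Boussinesq buoyancy force.

```latex
% Generic double-population BGK update (illustrative, not copied from the paper)
\[
  f_i(\mathbf{x}+\mathbf{c}_i\,\Delta t,\, t+\Delta t) - f_i(\mathbf{x},t)
    = -\frac{1}{\tau_f}\bigl[f_i(\mathbf{x},t) - f_i^{\mathrm{eq}}(\mathbf{x},t)\bigr]
      + \Delta t\, F_i ,
\]
\[
  g_i(\mathbf{x}+\mathbf{c}_i\,\Delta t,\, t+\Delta t) - g_i(\mathbf{x},t)
    = -\frac{1}{\tau_g}\bigl[g_i(\mathbf{x},t) - g_i^{\mathrm{eq}}(\mathbf{x},t)\bigr],
\]
% with the Boussinesq buoyancy force
\[
  \mathbf{F} = \rho\, g\, \beta\,(T - T_{\mathrm{ref}})\,\hat{\mathbf{y}} .
\]
```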

A Sub-mW Low Noise Amplifier for Wireless Sensor Networks

A low noise amplifier (LNA) with a 1.2 V supply and a 0.61 mA bias current, suitable for low-power applications in the 2.4 GHz band, is presented. The circuit has been implemented, laid out, and simulated using a UMC 130 nm RF-CMOS process. The amplifier provides a 13.3 dB power gain, a noise figure NF < 2.28 dB, and a 1-dB compression point of -15.69 dBm, while dissipating 0.74 mW. Such performance makes this design suitable for wireless sensor network applications such as ZigBee.

Solver for a Magnetic Equivalent Circuit and Modeling the Inrush Current of a 3-Phase Transformer

Knowledge of the magnetic quantities in a magnetic circuit is always of great interest. On the one hand, this information is needed for the simulation of a transformer; on the other hand, parameter studies are more reliable if the magnetic quantities are derived from a well-established model. One possibility for modeling a 3-phase transformer is to use a magnetic equivalent circuit (MEC). Though this is a well-known approach, it is often not an easy task to set up such a model with a large number of lumped elements that additionally includes the nonlinear characteristic of the magnetic material. Here we show the setup of a solver for a MEC and compare the calculated results with measurements. The equations of the MEC are based on a rearranged form of nodal analysis, which makes it possible to achieve a minimum number of equations and a clear and simple structure; hence, the solver is uncomplicated in its handling and supports the iteration process. Additional helper routines are implemented within the solver to enhance its performance. The electric circuit is described by an electric equivalent circuit (EEC). Our results for the 3-phase transformer demonstrate the computational efficiency of the solver and show the benefit of applying a MEC.
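
A toy example of the iterative treatment of a nonlinear magnetic circuit follows (a single-loop core with an air gap and an assumed saturation curve, not the paper's 3-phase MEC or its rearranged nodal formulation): the flux is found by damped fixed-point iteration because the core reluctance depends on the flux itself.

```python
# Illustrative sketch: solving MMF = (R_core(phi) + R_gap) * phi for a nonlinear core.
import numpy as np

MU0 = 4e-7 * np.pi
A, L_CORE, L_GAP = 4e-4, 0.3, 1e-3        # assumed cross-section [m^2] and path lengths [m]
N, I = 200, 2.0                            # assumed turns and coil current -> MMF = N*I

def mu_r(b):
    """Hypothetical saturating relative permeability as a function of flux density."""
    return 1.0 + 4000.0 / (1.0 + (b / 1.4) ** 6)

phi = 1e-6                                 # initial flux guess [Wb]
for _ in range(100):
    b = phi / A
    r_core = L_CORE / (MU0 * mu_r(b) * A)
    r_gap = L_GAP / (MU0 * A)
    phi_new = N * I / (r_core + r_gap)
    if abs(phi_new - phi) < 1e-12:
        break
    phi = 0.5 * phi + 0.5 * phi_new        # damped update for robust convergence
print(f"flux = {phi:.3e} Wb, B = {phi / A:.3f} T")
```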

Rapid Frequency Response Measurement of Power Conversion Products with Coherence-Based Confidence Analysis

Switched-mode converters now play a significant role in modern society, and their operation is often crucial in electrical applications affecting everyday life. Therefore, the quality of the converters needs to be reliably verified. Recent studies have shown that converters can be fully characterized by a set of frequency responses, which can be efficiently used to validate their proper operation. Consequently, several methods, most often correlation-based, have been proposed to measure the frequency responses quickly and accurately. These measurement methods are highly sensitive to external errors and system nonlinearities, a fact that has often been overlooked, and the necessary uncertainty analysis of the measured responses has been neglected. This paper presents a simple approach for analyzing the noise and nonlinearities in frequency-response measurements of switched-mode converters. Coherence analysis is applied to form a confidence interval characterizing the noise and nonlinearities involved in the measurements. The presented method is verified by practical measurements from a high-frequency switched-mode converter.
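
The general idea can be sketched as follows (a generic H1 frequency-response estimator with coherence, not the authors' exact measurement chain; the sampling rate, excitation, and the second-order stand-in "converter" are assumptions): a broadband excitation is applied, the response is estimated with Welch/CSD averaging, and the coherence function flags frequencies where noise or nonlinearity makes the estimate unreliable.

```python
# Illustrative sketch: H1 frequency-response estimate with a coherence-based confidence mask.
import numpy as np
from scipy import signal

fs = 100_000                                         # assumed sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
x = np.random.default_rng(0).normal(size=t.size)     # broadband excitation

# Stand-in "converter": a second-order low-pass plus measurement noise
b, a = signal.butter(2, 5_000, fs=fs)
y = signal.lfilter(b, a, x) + 0.05 * np.random.default_rng(1).normal(size=t.size)

f, Pxx = signal.welch(x, fs=fs, nperseg=4096)
_, Pxy = signal.csd(x, y, fs=fs, nperseg=4096)
H1 = Pxy / Pxx                                       # H1 frequency-response estimate
_, Cxy = signal.coherence(x, y, fs=fs, nperseg=4096)

reliable = Cxy > 0.95                                # simple coherence-based confidence mask
print(f"frequencies with coherence > 0.95: {reliable.mean() * 100:.1f} %")
```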