Effective Defect Prevention Approach in Software Process for Achieving Better Quality Levels

Defect prevention is a vital but habitually neglected facet of software quality assurance in any project. If applied at all stages of software development, it can reduce the time, cost and resources required to engineer a high-quality product. A key challenge for the IT industry is to engineer a software product with minimal post-deployment defects. This work is an analysis based on data obtained for five selected projects from leading software companies of varying software production competence. The main aim of this paper is to provide information on various methods and practices supporting defect detection and prevention, leading to successful software development. The defect prevention techniques described uncover 99% of defects. Inspection is found to be an essential technique for producing near-defect-free software, using improved schedules of aided and unaided inspection. On average, 13%-15% of total project effort spent on inspection and 25%-30% spent on testing is required to eliminate 99%-99.75% of defects. A comparison of the end results for the five selected projects across the companies is also presented, showing how a particular company can position itself with an appropriate complementary ratio of inspection to testing.

Simulation of Particle Damping under Centrifugal Loads

Particle damping is a technique for reducing structural vibrations by placing small metallic particles inside a cavity attached to the structure at locations of high vibration amplitude. In this paper, we present an analytical model to simulate the particle damping of two-dimensional transient vibrations in a structure operating under high centrifugal loads. The simulation results show that this technique remains effective as long as the ratio of the dynamic acceleration of the structure to the applied centrifugal acceleration is greater than 0.1. Particle damping increases with increasing particle-to-structure mass ratio. However, unlike the case of particle damping in the absence of centrifugal loads, where the damping efficiency depends strongly on the size of the cavity, here this dependence becomes very weak. Despite the simplicity of the model, the simulation results are in good agreement with the very scarce experimental data available in the literature for particle damping under centrifugal loads.
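
As a back-of-the-envelope illustration of the reported effectiveness criterion (not the paper's analytical model), the sketch below checks whether the ratio of peak dynamic acceleration to centrifugal acceleration exceeds 0.1, assuming harmonic vibration of amplitude A at frequency f and a cavity at radius r on a rotor spinning at a given speed; all parameter names and example values are illustrative.

    import math

    def damping_effective(vib_amp_m, vib_freq_hz, radius_m, spin_rpm, threshold=0.1):
        """Check the reported rule of thumb: particle damping stays effective
        while (peak dynamic acceleration) / (centrifugal acceleration) > 0.1.
        Assumes harmonic motion: a_dyn = A*(2*pi*f)**2, a_cf = omega**2 * r."""
        a_dyn = vib_amp_m * (2 * math.pi * vib_freq_hz) ** 2
        omega = spin_rpm * 2 * math.pi / 60.0     # rotational speed in rad/s
        a_cf = omega ** 2 * radius_m
        return a_dyn / a_cf > threshold

    # Example: 0.2 mm amplitude at 400 Hz, cavity at 0.3 m radius, 3000 rpm
    print(damping_effective(2e-4, 400.0, 0.3, 3000.0))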

A Frugal Bidding Procedure for Replicating WWW Content

Fine-grained data replication over the Internet allows duplication of frequently accessed data objects, as opposed to entire sites, at certain locations so as to improve the performance of large-scale content distribution systems. In a distributed system, agents representing their sites try to maximize their own benefit, since they are driven by different goals, such as minimizing their communication costs, latency, etc. In this paper, we use game-theoretical techniques, and in particular auctions, to identify a bidding mechanism that encapsulates the selfishness of the agents while keeping a controlling hand over them. In essence, the proposed game-theory-based mechanism studies what happens when independent agents act selfishly and how to control them to maximize overall performance. A bidding mechanism asks how one can design systems so that agents' selfish behavior results in the desired system-wide goals. Experimental results reveal that this mechanism provides excellent solution quality while maintaining fast execution time. Comparisons are made against some well-known techniques, such as greedy, branch and bound, game-theoretical auctions, and genetic algorithms.
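
As a toy illustration of the auction idea (not the paper's exact frugal mechanism), the sketch below runs a sealed-bid auction per object: each site agent bids the benefit it would gain from hosting a replica, and the object is placed at the highest positive bidder that still has capacity. The inputs `benefit` and `capacity` are assumptions for illustration.

    def allocate_replicas(objects, sites, benefit, capacity):
        """Toy sealed-bid auction for replica placement: for each object, every
        site bids its local benefit from hosting a replica (e.g., saved access
        cost minus storage/update cost); the highest positive bidder with spare
        capacity wins. benefit[s][o] and capacity[s] are illustrative inputs."""
        placement = {}
        load = {s: 0 for s in sites}
        for o in objects:
            bids = sorted(((benefit[s][o], s) for s in sites), reverse=True)
            for bid, site in bids:
                if bid > 0 and load[site] < capacity[site]:
                    placement[o] = site       # winning agent hosts the replica
                    load[site] += 1
                    break
        return placement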

Reliability Assessment of Bangladesh Power System Using Recursive Algorithm

An electric utility's main concern is to plan, design, operate and maintain its power supply so as to provide an acceptable level of reliability to its users. This clearly requires that standards of reliability be specified and used in all three sectors of the power system, i.e., generation, transmission and distribution. That is why the reliability of a power system is always a major concern to power system planners. This paper presents a reliability analysis of the Bangladesh Power System (BPS). The reliability index, loss of load probability (LOLP), of BPS is evaluated using a recursive algorithm, considering no de-rated states of the generators. BPS has sixty-one generators and a total installed capacity of 5275 MW. The maximum demand of BPS is about 5000 MW. The relevant generator data and hourly load profiles were collected from the National Load Dispatch Center (NLDC) of Bangladesh, and the reliability index LOLP is assessed for the last ten years.
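
A minimal sketch of the recursive (unit-addition) algorithm commonly used for this calculation, assuming two-state generators characterized by a forced outage rate (FOR) and no de-rated states, as in the paper; the data layout is illustrative.

    def outage_table(units):
        """Capacity-outage probability table built by the standard recursion
        P'(X) = P(X)*(1 - FOR) + P(X - C)*FOR, adding one unit at a time.
        units: list of (capacity_mw, forced_outage_rate) pairs."""
        probs = {0: 1.0}
        for cap, fo in units:
            new = {}
            for x, p in probs.items():
                new[x] = new.get(x, 0.0) + p * (1.0 - fo)      # unit available
                new[x + cap] = new.get(x + cap, 0.0) + p * fo  # unit on outage
            probs = new
        return probs

    def lolp(units, hourly_loads_mw):
        """LOLP: expected fraction of hours in which capacity on outage
        exceeds the reserve (installed capacity minus load)."""
        installed = sum(c for c, _ in units)
        probs = outage_table(units)
        risk = sum(sum(p for x, p in probs.items() if x > installed - load)
                   for load in hourly_loads_mw)
        return risk / len(hourly_loads_mw)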

Testing Loaded Programs Using Fault Injection Technique

Fault tolerance is critical in many of today's large computer systems. This paper focuses on improving fault tolerance through testing. Moreover, it concentrates on memory faults: how to access the editable part of a process's memory space and how this part is affected. A special Software Fault Injection Technique (SFIT) is proposed for this purpose. This is done by sequentially scanning the memory of the target process and attempting to edit the maximum number of bytes inside that memory. The technique was implemented and tested on a group of programs from software packages such as jetAudio, Notepad, Microsoft Word, Microsoft Excel, and Microsoft Outlook. The results from the test sample processes indicate that the size of the scanned area depends on several factors: process size, process type, and the virtual memory size of the machine under test. The results show that increasing the process size increases the scanned memory space. They also show that input-output processes have a larger scanned area than other processes. Increasing the virtual memory size also affects the size of the scanned area, but only up to a certain limit.
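
The paper's tool targets Windows applications; purely as a platform-neutral illustration of the scanning idea, the sketch below walks the writable regions of a Linux process via /proc and counts how many bytes can be read. Reading another process's memory typically requires ptrace attachment or root privileges; every detail here is an assumption, not the paper's implementation.

    import re

    def scan_writable_bytes(pid):
        """Walk the writable regions of a target process and count how many
        bytes can actually be read (a proxy for the 'editable' area). Linux
        /proc sketch; typically needs ptrace attach or root privileges."""
        scanned = 0
        with open(f"/proc/{pid}/maps") as maps, \
             open(f"/proc/{pid}/mem", "rb") as mem:
            for line in maps:
                m = re.match(r"([0-9a-f]+)-([0-9a-f]+)\s+(\S+)", line)
                if not m or "w" not in m.group(3):
                    continue                        # skip non-writable regions
                start, end = int(m.group(1), 16), int(m.group(2), 16)
                try:
                    mem.seek(start)
                    scanned += len(mem.read(end - start))
                except (OSError, ValueError):
                    pass                            # unreadable (e.g., guard page)
        return scanned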

Reform-Oriented Teaching of Introductory Statistics in the Health, Social and Behavioral Sciences – Historical Context and Rationale

There is widespread emphasis on reform in the teaching of introductory statistics at the college level. Underpinning this reform is a consensus among educators and practitioners that traditional curricular materials and pedagogical strategies have not been effective in promoting statistical literacy, a competency that is becoming increasingly necessary for effective decision-making and evidence-based practice. This paper explains the historical context of, and rationale for, reform-oriented teaching of introductory statistics (at the college level) in the health, social and behavioral sciences (evidence-based disciplines). A firm understanding and appreciation of the basis for the change in pedagogical approach is important in order to facilitate commitment to reform, consensus building on appropriate strategies, and the adoption and maintenance of best practices. In essence, reform-oriented pedagogy, in this context, is a function of the interaction among content, pedagogy, technology, and assessment. The challenge is to create an appropriate balance among these domains.

Prospective Mathematics Teachers' Views about Using Flash Animations in Mathematics Lessons

The purpose of the study is to determine secondary prospective mathematics teachers' views on using flash animations in mathematics lessons and to reveal how sample presentations of different mathematical concepts altered their views. This is a case study involving three secondary prospective mathematics teachers from a state university in Turkey. The data were gathered through two semi-structured interviews. Findings revealed that these animations help students understand mathematics meaningfully, relate mathematics to the real world, visualize concepts, and comprehend the importance of mathematics. The analysis of the data indicated that the sample presentations enhanced the participants' views about using flash animations in mathematics lessons.

Hydrophobic Characteristics of EPDM Composite Insulators in Simulated Arid Desert Environment

Overhead electrical insulators form an important link in an electric power system. Along with traditional insulators (e.g., glass and porcelain), polymeric insulators are now also used worldwide. These polymeric insulators are very sensitive to various environmental parameters, such as temperature, environmental pollution and UV radiation, which seriously affect their electrical, chemical and hydrophobic properties. The UV radiation level in the central region of Saudi Arabia is high compared with the IEC standard for the accelerated aging of composite insulators. A commonly used suspension-type composite EPDM (Ethylene Propylene Diene Monomer) insulator was subjected to accelerated stress aging as per a modified IEC standard simulating the atmospheric conditions of inland arid deserts, and also as per the IEC 61109 standard. The hydrophobic characteristics were studied by measuring the contact angle along the insulator surface before and after the accelerated aging of the samples. It was found that the EPDM insulator loses its hydrophobic properties in proportion to the intensity of UV irradiation, and that its rate of recovery is also very low compared with Silicone Rubber insulators.

Keywords: EPDM, composite insulators, accelerated aging, hydrophobicity, contact angle.

A Preliminary Study on the Suitability of Data Driven Approach for Continuous Water Level Modeling

Reliable water level forecasts are particularly important for warning against dangerous floods and inundation. The current study aims at investigating the suitability of the adaptive network-based fuzzy inference system for continuous water level modeling. A hybrid learning algorithm, which combines the least-squares method and the back-propagation algorithm, is used to identify the parameters of the network. For this study, water level data are available for the 2002 hydrological year with a sampling interval of 1 hour. The number of antecedent water levels to include in the input variables is determined by two statistical methods, i.e., the autocorrelation function and the partial autocorrelation function of the variables. Forecasting was done for 1 hour up to 12 hours ahead in order to compare the models' generalization at longer horizons. The results demonstrate that the adaptive network-based fuzzy inference system model can be applied successfully and provides high accuracy and reliability for river water level estimation. In general, the adaptive network-based fuzzy inference system provides accurate and reliable water level prediction for 1 hour ahead, achieving MAPE = 1.15% and a correlation of 0.98. For predictions up to 12 hours ahead, the model still shows relatively good performance, with a prediction error of less than 9.65%. The information gathered from these preliminary results provides useful guidance for the design of flood early warning systems, in which the magnitude and the timing of a potential extreme flood are indicated.
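
A minimal sketch of the data preparation and the accuracy measure quoted above (the ANFIS model itself would be trained separately); here the lag count is set by hand, whereas the paper selects it via the ACF/PACF.

    import numpy as np

    def make_lagged_dataset(levels, n_lags, horizon):
        """Build (X, y) pairs from an hourly water-level series: each input is
        the n_lags most recent levels, the target is the level `horizon` hours
        ahead. Illustrative layout, not the paper's exact pipeline."""
        X, y = [], []
        for t in range(n_lags - 1, len(levels) - horizon):
            X.append(levels[t - n_lags + 1 : t + 1])
            y.append(levels[t + horizon])
        return np.array(X), np.array(y)

    def mape(actual, predicted):
        """Mean absolute percentage error, the accuracy measure quoted above."""
        actual = np.asarray(actual, float)
        predicted = np.asarray(predicted, float)
        return 100.0 * np.mean(np.abs((actual - predicted) / actual))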

Space Charge Distribution in 22 kV XLPE Insulated Cable by Using Pulse Electroacoustic Measurement Technique

This paper presents experimental results on space charge distribution in cross-linked polyethylene (XLPE) insulating material for 22 kV power distribution system cables, obtained using the pulsed electroacoustic (PEA) measurement technique. Ribbons of XLPE insulating material, 60 μm thick, taken from an unused 22 kV high-voltage cable, were used as specimens in this study. DC electric field stress was applied to the test specimens at room temperature (25°C). Four levels of electric field stress were used: 25 kV/mm, 50 kV/mm, 75 kV/mm and 100 kV/mm. In order to investigate the space charge distribution characteristics, they were measured after applying the electric field stress for 15 min, 30 min and 60 min, respectively. The results show that the application time and the magnitude of the DC electric field stress play an important role in the formation of space charge.

A Multiple Inlet Swirler for Gas Turbine Combustors

The central recirculation zone (CRZ) in a swirl-stabilized gas turbine combustor has a dominant effect on the fuel-air mixing process and flame stability. Most state-of-the-art swirlers share one disadvantage: a fixed swirl number for a given swirler configuration. Thus, in a mathematical sense, the Reynolds number becomes the sole parameter for controlling the flow characteristics inside the combustor. As a result, at low-load operation, the generated swirl is likely to weaken, affecting flame stabilization and the mixing process. This paper introduces a new swirler concept which overcomes this weakness of modern configurations. The new swirler introduces air tangentially and axially into the combustor through tangential and axial vanes, respectively. It therefore provides different swirl numbers for the same configuration by regulating the ratio between the axial and tangential flow momenta. The swirler's aerodynamic performance was investigated using four CFD simulations in order to demonstrate the impact of the tangential-to-axial flow rate ratio on the CRZ. It was found that the length of the CRZ is directly proportional to the tangential-to-axial air flow rate ratio.
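
As a simplified illustration of why splitting the flow controls the swirl for a fixed geometry, the sketch below estimates the swirl number S = G_theta / (G_x R) from an assumed tangential/axial mass-flow split. This is a rough one-dimensional approximation, not the paper's CFD model, and all parameter values are illustrative.

    def swirl_number(m_dot_tan, m_dot_ax, v_tan, v_ax, radius):
        """Rough 1-D estimate of the swirl number S = G_theta / (G_x * R),
        where G_theta is the axial flux of tangential momentum and G_x the
        axial momentum flux. Feeding a larger share of the total flow
        tangentially raises S for the same geometry."""
        g_theta = m_dot_tan * v_tan * radius        # angular momentum flux
        g_x = (m_dot_tan + m_dot_ax) * v_ax         # axial momentum flux
        return g_theta / (g_x * radius)

    # Same swirler, two operating points: 30% vs 60% of flow fed tangentially
    print(swirl_number(0.3, 0.7, 25.0, 20.0, 0.05))   # weaker swirl
    print(swirl_number(0.6, 0.4, 25.0, 20.0, 0.05))   # stronger swirl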

To Join or Not to Join: The Effects of Healthcare Networks

This study uses a simulation to establish a realistic environment for laboratory research on Accountable Care Organizations. We study network attributes in order to gain insights regarding healthcare providers' conduct and performance. Our findings indicate how network structure creates significant differences in organizational performance. We demonstrate that healthcare providers who position themselves at the central, pivotal point of the network, while maintaining their alliances with their partners, produce better outcomes.
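
One common way to quantify the "central, pivotal point" referred to above is a graph centrality measure. The sketch below applies betweenness centrality to a toy provider network using the networkx library; the network and the provider names are illustrative, not the study's data or method.

    import networkx as nx

    # Toy provider network; nodes and ties are illustrative only
    G = nx.Graph()
    G.add_edges_from([
        ("Hospital A", "Clinic 1"), ("Hospital A", "Clinic 2"),
        ("Hospital A", "Lab X"), ("Clinic 1", "Lab X"),
        ("Clinic 2", "Imaging Y"), ("Hospital B", "Imaging Y"),
    ])

    # Betweenness centrality: how often a provider lies on shortest paths
    centrality = nx.betweenness_centrality(G)
    pivotal = max(centrality, key=centrality.get)
    print(pivotal, round(centrality[pivotal], 3))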

Comparative Evaluation of Color-Based Video Signatures in the Presence of Various Distortion Types

The robustness of color-based signatures in the presence of a selection of representative distortions is investigated. Five signatures that have been developed and evaluated within a new modular framework are considered. Two of the signatures are directly derived from histograms gathered from video frames. The other three signatures capture temporal information by computing difference histograms between adjacent frames. In order to obtain objective and reproducible results, the evaluations are conducted on several randomly assembled test sets. These test sets are extracted from a video repository that contains a wide range of broadcast content, including documentaries, sports, news, movies, etc. Overall, the experimental results show the adequacy of color-histogram-based signatures for video fingerprinting applications and indicate which type of signature should be preferred in the presence of certain distortions.
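
A minimal sketch of the two signature families described above, assuming frames arrive as H x W x 3 uint8 numpy arrays; the paper's exact descriptors and bin layouts may differ.

    import numpy as np

    def frame_histogram(frame, bins=32):
        """Normalized per-channel color histogram of one frame (HxWx3, uint8)."""
        chans = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
                 for c in range(3)]
        hist = np.concatenate(chans).astype(float)
        return hist / hist.sum()

    def video_signature(frames, temporal=True):
        """Spatial variant: one histogram per frame. Temporal variant:
        difference histograms between adjacent frames, as used by three
        of the five signatures."""
        hists = [frame_histogram(f) for f in frames]
        if temporal:
            return [h2 - h1 for h1, h2 in zip(hists, hists[1:])]
        return hists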

Encryption Efficiency Analysis and Security Evaluation of RC6 Block Cipher for Digital Images

This paper investigates the encryption efficiency of the RC6 block cipher applied to digital images, providing a new mathematical measure of encryption efficiency, which we call the encryption quality, as an alternative to visual inspection. The encryption quality of the RC6 block cipher is investigated with respect to several of its design parameters, such as word size, number of rounds, and secret key length, and the optimal choices for these design parameters are given. The security of the RC6 block cipher for digital images is also analyzed from a strict cryptographic viewpoint. Its security against brute-force, statistical, and differential attacks is estimated, and experiments are made to test its security against all the aforementioned types of attacks. The experiments and results, carried out with detailed analysis, verify that the RC6 block cipher is highly secure for real-time image encryption from a cryptographic viewpoint. RC6 can therefore be considered a secure symmetric cipher for real-time digital image encryption.
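
A sketch of the kind of histogram-based measure the abstract describes, under the assumption that encryption quality is quantified as the average absolute deviation, over all 256 gray levels, between the plain-image and cipher-image histograms; this is a plausible formalization for illustration, not necessarily the paper's exact formula.

    import numpy as np

    def encryption_quality(plain_img, cipher_img):
        """Average absolute difference, over the 256 gray levels, between the
        occurrence counts in the plain image and in the cipher image. Higher
        values indicate that encryption changed the pixel-value distribution
        more thoroughly. Assumed formalization for illustration."""
        hp, _ = np.histogram(plain_img, bins=256, range=(0, 256))
        hc, _ = np.histogram(cipher_img, bins=256, range=(0, 256))
        return np.abs(hc - hp).sum() / 256.0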

High Efficiency, Selectivity against Cancer Cell Line of Purified L-Asparaginase from Pathogenic Escherichia coli

L-asparaginase was extracted from pathogenic Escherichia coli isolated from urinary tract infection patients. The enzyme was purified 96-fold by ultrafiltration, ion exchange and gel filtration, giving a 39.19% yield with a final specific activity of 178.57 IU/mg. L-asparaginase showed a molecular weight of 138,356 ± 1,000 Da, with a subunit molecular mass of 31,024 ± 100 Da. Kinetic characterization of the enzyme gave a Km of 1.25 × 10⁻⁵ mM and a Vmax of 2.5 × 10⁻³ M/min. L-asparaginase showed maximum activity at pH 7.5 when incubated at 37°C for 30 min, and retained its full activity (100%) after 15 min of incubation at 20-37°C, while 70% of its activity was lost when incubated at 60°C. L-asparaginase was cytotoxic to the U937 cell line, with an IC50 of 0.5 ± 0.19 IU/ml and a selectivity index of SI = 7.6, i.e., about eight times higher selectivity toward the cancer cells than toward normal lymphocytes. Therefore, the local pathogenic E. coli strains may be used as a source of a high yield of L-asparaginase for producing an anticancer agent with high selectivity.
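
A worked illustration of the reported kinetic constants using the Michaelis-Menten equation v = Vmax[S]/(Km + [S]); the substrate concentration in the example is arbitrary.

    def mm_rate(s_mM, km_mM=1.25e-5, vmax_M_per_min=2.5e-3):
        """Michaelis-Menten rate v = Vmax * [S] / (Km + [S]) with the kinetic
        constants reported above (Km in mM, Vmax in M/min)."""
        return vmax_M_per_min * s_mM / (km_mM + s_mM)

    # Sanity check: at [S] = Km the rate should be Vmax / 2
    print(mm_rate(1.25e-5))   # ~1.25e-3 M/min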

Phase Error Accumulation Methodology for On-Chip Cell Characterization

This paper describes a new method for measuring propagation delay in micro- and nanostructures during the characterization of ASIC standard library cells. By providing more accurate timing information about library cells to the design team, the quality of timing analysis within the ASIC design flow can be improved. This information can also be very useful to the semiconductor foundry for correcting the technology process. The propagation delay measured in the CMOS element is compared against the result of analog SPICE simulation. The method was implemented as a digital IP core for the semiconductor manufacturing process. It makes it possible to observe the propagation delay of a single element of the standard-cell library with an accuracy of picoseconds or better. Thus, useful solutions for VLSI parameter extraction, basic cell layout verification, and design simulation and verification are presented.
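
The abstract does not detail the measurement circuit; as one common way to reach picosecond resolution on-chip by accumulating many delay periods, the sketch below back-computes a per-cell delay from a free-running ring-oscillator count. This is an assumed illustration of the accumulation principle, not the paper's IP core.

    def per_cell_delay_ps(edge_count, gate_window_s, n_cells):
        """Estimate a standard cell's propagation delay from an accumulated
        measurement: a ring of n_cells identical cells oscillates freely for
        gate_window_s seconds while edge_count transitions are counted. Each
        period traverses every cell twice (rising + falling edge), so
        averaging over many periods pushes the effective resolution toward
        picoseconds even with a coarse counter."""
        period_s = 2.0 * gate_window_s / edge_count   # two edges per period
        return period_s / (2 * n_cells) * 1e12

    # Example: 1 ms window, 101-stage ring, 4.9e6 edges counted -> ~2 ps/cell
    print(per_cell_delay_ps(4.9e6, 1e-3, 101))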

Design for Manufacturability and Concurrent Engineering for Product Development

In the 1980s, companies began to feel the effect of three major influences on their product development: newer and innovative technologies, increasing product complexity, and larger organizations. Companies were therefore forced to look for new product development methods. This paper focuses on two of these new product development methods, Design for Manufacturability (DFM) and Concurrent Engineering (CE), and analyzes them in detail. With these methods, companies can benefit by shortening the product life cycle, reducing cost, and meeting delivery schedules. The paper also presents simplified models that can be modified and used by different companies based on their objectives and requirements. The research methodology followed is the case study: two companies were examined and their product development processes analyzed. Historical data were collected and interviews conducted at these companies; in addition, a survey of the literature and previous research on similar topics was carried out. The paper also presents a cost-benefit analysis of implementation and estimates the implementation time. The research found that the two companies did not meet their delivery times to the customer: analysis of their most frequently produced products showed that 50% to 80% were not delivered on time. The companies follow the traditional, sequential design-and-production approach to product development, which strongly affects time to market. The case studies show that by implementing these new methods and forming multidisciplinary teams for design and quality inspection, the companies can reduce their workflow from 40 steps to 30.

The Role of the Ethnos of Intellect in Legal and Informatical Observation of “Information Society“

By the end of the twentieth century, certain changes had been provoked in the structure of humanity: alongside the historical types of ethnoses (the open ethnos, the closed ethnos, the wandering ethnos, and the dead ethnos), a new ethnos, the Ethnos of Intellect, has formed and is still forming. This development was caused by technical progress and the growth of information and transport communications, and especially by the creation of the Internet. The Ethnos of Intellect is something very close to the "Information Society" described by J. Ellul and Y. Masuda, which was regarded as the culture of the twenty-first century, being an antithesis to technical and technicistic civilizations; but it is necessary to indicate an essential difference between these concepts: the Ethnos of Intellect is the antithesis of the Socium. The existence of such an ethnos within a human society that has itself already become an Information Society is extremely important for the legal and informatical observation of a new kind of reins in the hands of political power, revealing every attempt to violate the human rights of ordinary citizens. A concrete example of the conjunction of legal informatics and informatical law is provided by studies surrounding the project "State Register of Population" in Russia.

Relational Framework and its Applications

This paper has, as its point of departure, the foundational axiomatic theory of E. De Giorgi (1996, Scuola Normale Superiore di Pisa, Preprints di Matematica 26, 1), based on the two primitive notions of quality and relation. With the introduction of a unary relation, we develop a system based entirely on the sole primitive notion of relation. This modification enables a definition of the concept of a dynamic unary relation. In this way we construct a simple language capable of expressing other well-known theories, such as Robinson's arithmetic or a fragment of a theory of concatenation. A key role in this system is played by an abstract relation designated by "( )", which can be interpreted in different ways; in this paper we focus on the case in which we can perform computations and obtain results.

An Efficient Technique for Extracting Fuzzy Rules from Neural Networks

Artificial neural networks (ANNs) have the ability to model input-output relationships from processing raw data. This characteristic makes them invaluable in industry domains where such knowledge is scarce at best. In recent decades, in order to overcome the black-box characteristic of ANNs, researchers have attempted to extract the knowledge embedded within ANNs in the form of rules that can be used in inference systems. This paper presents a new technique that is able to extract a small set of rules from a two-layer ANN. The extracted rules yield high classification accuracy when implemented within a fuzzy inference system. The technique targets industry domains with less complex problems, for which no expert knowledge exists and for which a simpler solution is preferred to a complex one. The proposed technique is simpler, more efficient, and more widely applicable than most previously proposed techniques.
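
To convey the flavor of weight-based rule extraction (the paper's actual technique for two-layer ANNs is more elaborate and is not reproduced here), the naive sketch below maps the strongest weights of a trained single-layer classifier to fuzzy antecedents; all names and thresholds are illustrative.

    import numpy as np

    def extract_rules(weights, feature_names, terms=("low", "high"), top_k=2):
        """Naive weight-based rule extraction: for each output class, keep the
        top_k inputs by |weight| and map the weight sign to a fuzzy term
        (positive -> 'high', negative -> 'low'). Illustrative only."""
        rules = []
        for cls, w in enumerate(weights):           # one weight row per class
            idx = np.argsort(-np.abs(w))[:top_k]
            ants = [f"{feature_names[i]} is {terms[1] if w[i] > 0 else terms[0]}"
                    for i in idx]
            rules.append(f"IF {' AND '.join(ants)} THEN class {cls}")
        return rules

    w = np.array([[0.9, -0.2, 0.1], [-0.7, 0.8, 0.0]])
    print(extract_rules(w, ["x1", "x2", "x3"]))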