Application of Reliability Analysis Methods to Concrete Dams

Probabilistic risk analysis models provide a better understanding of the reliability and structural failure of engineering works, notably when assessing the stability of large structures exposed to major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods used in engineering. In our case, level 2 methods are applied through the study of a limit state: the probability of failure is estimated by analytical methods of the first-order reliability method (FORM) and second-order reliability method (SORM) type. By way of comparison, a level 3 method was also used; it carries out a full analysis of the problem and involves integrating the joint probability density function of the random variables over the safe domain, using Monte Carlo simulation. Taking into account the change in stresses under the normal, exceptional, and extreme load combinations acting on the dam, the results obtained provide acceptable failure probability values that largely corroborate the theory: the probability of failure tends to increase with increasing load intensity, causing a significant decrease in strength; the shear forces then induce sliding that threatens the reliability of the structure with intolerable failure probability values, especially when uplift increases under a hypothetical failure of the drainage system.
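As an illustration of the level 3 approach described above, the following minimal sketch estimates a failure probability by Monte Carlo simulation for a hypothetical sliding limit state g = R - S; the distributions and parameters are assumptions made for the example, not the paper's data.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative sliding limit state for a gravity dam: g = R - S, failure when g <= 0.
# Distributions and parameters are assumed for this sketch, not taken from the paper.
n = 1_000_000
R = rng.lognormal(mean=np.log(5000.0), sigma=0.15, size=n)  # shear resistance (kN)
S = rng.normal(loc=3500.0, scale=400.0, size=n)             # sliding force (kN)

pf = np.mean(R - S <= 0.0)   # level 3 Monte Carlo estimate of the failure probability
beta = -norm.ppf(pf)         # equivalent reliability index
print(f"P_f = {pf:.2e}, beta = {beta:.2f}")

Raising the mean load, or adding an uplift term to S, shifts g toward zero and drives the estimated failure probability up, mirroring the trend reported in the abstract.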

Investigation of Failures in Wadi-Crossing Pipe Culverts, Sennar State, Sudan

Crossing culverts are essential elements of rural roads. This paper aims to investigate failures of recently constructed wadi-crossing pipe culverts in Sennar State and to provide the necessary remedial measures. The investigation provides an extensive diagnostic study intended to find out the main structural and hydrological weaknesses of the culverts. The literature on steel pipe culverts was reviewed, covering construction practices, common types of culvert failures, and their appropriate mitigation measures. A detailed field survey was conducted to detect failures and defects that appeared on the existing culverts. The results revealed that seepage of water through the embankment and foundation of the culverts leads to excessive erosion and scour, causing severe failures and damage. Design mistakes and poor construction were identified as the main causes of the culvert failures. For the sustainability of the culverts, various remedial measures are recommended for the urgent rehabilitation of the existing crossings.

Investigating Causes of Pavement Deterioration in Khartoum State, Sudan

It is quite essential to investigate the causes of pavement deterioration in order to select the proper maintenance technique. The objective of this study was to identify the factors causing the deterioration of recently constructed roads in Khartoum State. A comprehensive review of the literature concerning the factors of road deterioration, common road defects, and their causes was conducted. Three major road projects with different causes of deterioration were selected for this study. The investigation involved field surveys and laboratory testing on those projects to examine the existing pavement conditions. The results revealed that the roads investigated experienced severe failures in the form of cracks, potholes, and rutting in the wheel path. The causes of those failures were found to be mainly linked to poor drainage, traffic overloading, expansive subgrade soils, and the use of low-quality materials in construction. Based on the results, recommendations were provided to help highway engineers select the most effective repair techniques for specific kinds of distress.

Space Telemetry Anomaly Detection Based on Statistical PCA Algorithm

A critical concern of satellite operations is to ensure the health and safety of the satellite. The worst case in this respect is probably the loss of a mission, but the more common interruption of satellite functionality can also compromise mission objectives. All the data acquired from the spacecraft are known as telemetry (TM), which contains a wealth of information related to the health of all its subsystems. Each single item of information is contained in a telemetry parameter, which represents a time-variant property (i.e., a status or a measurement) to be checked. As a consequence, TM monitoring systems are continuously improved to reduce the time required to respond to changes in a satellite's state of health. A fast grasp of the current state of the satellite is thus very important for responding to occurring failures. Statistical multivariate latent-variable techniques are among the vital learning tools used to tackle this problem coherently. Extracting information from such rich data sources using advanced statistical methodologies is a challenging task due to the massive volume of data. To address this problem, this paper presents an unsupervised learning algorithm based on the Principal Component Analysis (PCA) technique. The algorithm is applied to an actual remote sensing spacecraft. Data from the Attitude Determination and Control System (ADCS) were acquired under two operating conditions: normal and faulty states. The models were built and tested under these conditions, and the results show that the algorithm can successfully differentiate between the two. Furthermore, the algorithm provides useful predictive information and adds insight and physical interpretation to the ADCS operation.
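A minimal sketch of PCA-based anomaly monitoring in the spirit of this abstract is shown below: a model is fitted to nominal telemetry, and new samples are flagged when their squared prediction error (SPE) leaves the nominal subspace. The channel count, synthetic data, and 95th-percentile control limit are assumptions for the example, not the paper's model.

import numpy as np

def fit_pca_monitor(X_nominal, n_components):
    """Fit a PCA subspace to nominal telemetry and return an SPE control limit."""
    mu = X_nominal.mean(axis=0)
    sd = X_nominal.std(axis=0) + 1e-12
    Z = (X_nominal - mu) / sd                     # standardize each TM channel
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T                       # retained loading vectors
    spe = np.sum((Z - Z @ P @ P.T) ** 2, axis=1)  # residual energy per sample
    return mu, sd, P, np.percentile(spe, 95)      # empirical 95% control limit

def spe(X, mu, sd, P):
    Z = (X - mu) / sd
    return np.sum((Z - Z @ P @ P.T) ** 2, axis=1)

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 12))              # 12 ADCS-like channels, nominal
mu, sd, P, limit = fit_pca_monitor(X_train, n_components=3)

X_new = rng.normal(size=(100, 12))
X_new[40:45] += 4.0                               # injected fault signature
# Expect the injected samples 40-44 plus roughly 5% nominal false alarms,
# since the control limit is the 95th percentile of nominal SPE.
print(np.where(spe(X_new, mu, sd, P) > limit)[0])

Hotelling's T-squared statistic on the retained scores is the usual complement to the SPE statistic when both in-subspace and out-of-subspace deviations matter.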

Trustworthy Link Failure Recovery Algorithm for Highly Dynamic Mobile Ad Hoc Networks

This paper introduces the Trustworthy Link Failure Recovery algorithm to provide forwarding continuity even under compound link failures. Ephemeral failures are common in IP networks, and several existing proposals handle them through local rerouting. To ensure forwarding continuity under compound link failures, each packet carries a blacklist, a minimal set of failed links encountered along its path, and the next hop is chosen by excluding the blacklisted links. We describe how the method can be applied to ensure forwarding to all reachable destinations in the case of any two or more link or node failures in the network. NS2 simulations over a large number of samples show that the proposed protocol achieves excellent performance even under high node mobility.
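The blacklist mechanism can be pictured with the short sketch below: a packet accumulates the failed links it encounters, and each node picks the best non-blacklisted neighbor. The graph encoding, precomputed distance table, and TTL guard are illustrative assumptions, not the paper's protocol specification.

def next_hop(graph, node, dist, blacklist):
    """Pick the usable neighbor closest to the destination (by hop count)."""
    usable = [n for n in sorted(graph[node])       # sort for a stable tie-break
              if frozenset((node, n)) not in blacklist]
    if not usable:
        return None                                # every adjacent link blacklisted
    return min(usable, key=lambda n: dist.get(n, float("inf")))

def forward(graph, src, dest, dist, failed_links, ttl=64):
    """Deliver a packet, growing its carried blacklist as failed links are met."""
    blacklist, node, path = set(), src, [src]
    while node != dest and ttl > 0:
        ttl -= 1                                   # guard against routing loops
        nxt = next_hop(graph, node, dist, blacklist)
        if nxt is None:
            return None                            # no usable link remains here
        link = frozenset((node, nxt))
        if link in failed_links:                   # send fails: blacklist and retry
            blacklist.add(link)
            continue
        node = nxt
        path.append(node)
    return path if node == dest else None

graph = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"}, "D": {"B", "C"}}
dist = {"A": 2, "B": 1, "C": 1, "D": 0}            # precomputed hops to D
print(forward(graph, "A", "D", dist, failed_links={frozenset(("A", "B"))}))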

The Urban Project and Urban Improvement Put to the Test of Participation, Case: The Project of Modernization of Constantine

In the framework of the modernization of the city of Constantine, and in order to restore its status as a regional metropolis by introducing it into the network of international metropolitan cities, a major urban project was launched: the project of modernization and metropolitanization of the city of Constantine. Our research focuses on the management of the project for the modernization of the city of Constantine (PMMC), and in particular on the management of certain aspects of the urban project, among them participation, with the objective of assessing the managerial approach adopted. In this contribution, we focus on two revealing cases taken into account in our research on the participation of actors and their organizations: the urban project of modernization of Constantine, and the urban improvement operation in the Brothers FERRAD housing estate in the Zouaghi district. This project and this operation, both aimed at improving the living conditions of citizens, have faced several challenges and obstacles that have in large part been the factors of their failures. Through this study, we examine the management process and the mode of organization of the project's actors, as well as the level of citizen participation, and finally propose managerial solutions to the conflict situations observed.

Seismic Performance of Reinforced Concrete Frames Infilled by Masonry Walls with Different Heights

This study carried out a comparative assessment of the seismic performance of reinforced concrete frames infilled by masonry walls with different heights. Partially and fully infilled reinforced concrete frames were modeled for the research objectives, and an analysis model of a bare reinforced concrete frame was also established for comparison. Non-linear static analyses of the studied frames were performed to investigate their structural behavior under extreme seismic loads and to identify their collapse mechanisms. The analysis results show that the strengths of the partially infilled reinforced concrete frames increase and their ductilities reduce as the infilled masonry walls become higher. In particular, reinforced concrete frames with higher partially infilled masonry walls would experience shear failures. Non-linear dynamic analyses using 10 earthquake records show that the bare and fully infilled reinforced concrete frames present a stable collapse mechanism, while the reinforced concrete frames with partially infilled masonry walls collapse in a more brittle manner due to short-column effects.

Determining Occurrence in FMEA Using the Hazard Function

FMEA has been used for several years and has proven its efficiency for analyzing a system's risk due to failures. The risk priority number found in FMEA is used to rank the failure modes that may occur in a system. There are guidelines in the literature for assigning the values of the FMEA components known as Severity, Occurrence, and Detection. This paper proposes a method to assign the value for Occurrence in a more realistic manner, representing the state of the system under study rather than depending entirely on the experience of the analyst. The method uses the hazard function of a system to determine the value of Occurrence according to whether the hazard is constant, increasing, or decreasing.
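A minimal sketch of the idea, assuming a Weibull hazard model and an illustrative failures-per-hour rating scale (both assumptions of this example, not the paper's tables), is shown below.

import bisect

# Weibull hazard: h(t) = (beta/eta) * (t/eta)**(beta - 1).
# beta > 1 gives an increasing hazard (wear-out), beta = 1 a constant hazard,
# beta < 1 a decreasing hazard (infant mortality).
def weibull_hazard(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1.0)

# Assumed mapping from hazard (failures/hour) to a 1-10 Occurrence rating;
# a real FMEA scale should come from the organization's own guidelines.
THRESHOLDS = [1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 0.05, 0.1, 0.2]

def occurrence_rating(h):
    return bisect.bisect_right(THRESHOLDS, h) + 1   # rating in 1..10

# Evaluate the hazard at the system's current age, so Occurrence reflects
# the present state of the system rather than a static expert guess.
t_now = 8000.0                                      # operating hours (assumed)
h = weibull_hazard(t_now, beta=2.2, eta=10000.0)    # increasing hazard case
print(f"h({t_now:.0f}) = {h:.2e} failures/h -> Occurrence {occurrence_rating(h)}")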

The Use of Degradation Measures to Design Reliability Test Plans

With short product development times, there is an increased need to demonstrate product reliability relatively quickly with minimal testing. In such cases there may be few, if any, observed failures. Thus, it may be difficult to assess reliability using the traditional reliability test plans that measure only time (or cycles) to failure. For many components, degradation measures contain important information about performance and reliability. These measures can be used to design a minimal test plan, in terms of the number of units placed on test and the duration of the test, necessary to demonstrate a reliability goal. In this work we present a case study involving an electronic component subject to degradation. The data, consisting of 42 degradation paths of cycles to failure, are first used to estimate a reliability function. Bootstrapping techniques are then used to perform power studies and develop a minimal reliability test plan for future production of this component.
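As a hedged illustration of the bootstrap step, the sketch below resamples synthetic failure-time data to put a confidence band on a nonparametric reliability estimate at a demonstration time; the Weibull-generated stand-in data and the demonstration point are assumptions, not the paper's dataset.

import numpy as np

rng = np.random.default_rng(2)
cycles_to_failure = rng.weibull(2.0, size=42) * 50_000   # stand-in for the 42 paths

def reliability_at(t, sample):
    """Nonparametric reliability estimate: R(t) = fraction surviving beyond t."""
    return np.mean(sample > t)

t_demo = 20_000.0
boot = np.array([
    reliability_at(t_demo, rng.choice(cycles_to_failure, size=42, replace=True))
    for _ in range(5_000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"R({t_demo:.0f}) = {reliability_at(t_demo, cycles_to_failure):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")

Repeating such resampling across candidate sample sizes and test durations is one way to run the power studies used to size a minimal test plan.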

Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of a Simple Step-Stress Accelerated Life Testing Plan under the Weibull Life Distribution

This paper discusses the effects of using progressive Type-I right censoring on the design of a simple step-stress accelerated life test, using a Bayesian approach for Weibull life products under the assumption of the cumulative exposure model. The optimization criterion is to minimize the expected pre-posterior variance of the Pth percentile of the time to failure. The model variables are the stress-changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results show that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, the results show that using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test without greatly affecting its precision. The results also show that the choice of direct or indirect priors affects the precision of the test.

Assessment of Landslide Volume for Alishan Highway Based on a Database of Rainfall-Induced Slope Failures

In this paper, a study of slope failures along the Alishan Highway is carried out. An innovative empirical model is developed based on 15-year records of rainfall-induced slope failures. The statistical models are intended for assessing the volume of future landslides along the Alishan Highway. The rainfall data considered in the proposed models include the effective cumulative rainfall and the critical rainfall intensity. The effective cumulative rainfall is defined as the cumulative rainfall at the point where the cumulative-rainfall curve turns from steep to flat. Rainfall thresholds for landslides are then established for assessing landslide volume and issuing warnings and/or closures for the Alishan Highway during future extreme rainfall events. Slope failures during Typhoon Saola in 2012 demonstrate that the new empirical model is effective and applicable to other cases with similar rainfall conditions.
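One ingredient of the model, locating the steep-to-flat turning point that defines the effective cumulative rainfall, can be sketched as below; the 1 mm/h slope cut-off and the synthetic storm are assumptions made for illustration, not the paper's criterion.

import numpy as np

def effective_cumulative_rainfall(hourly_mm, slope_cutoff=1.0):
    """Return the cumulative rainfall up to the last hour whose intensity
    (the slope of the cumulative curve, in mm/h) still exceeds the cut-off."""
    cum = np.cumsum(hourly_mm)
    steep = np.nonzero(np.asarray(hourly_mm) > slope_cutoff)[0]
    return cum[steep[-1]] if steep.size else 0.0

hourly = [2, 8, 25, 40, 33, 12, 4, 0.5, 0.2, 0.0]    # synthetic storm (mm/h)
print(effective_cumulative_rainfall(hourly))          # mm at the steep-to-flat point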

Modeling and Simulation of a Serial Production Line with Constant Work-In-Process

This paper presents a model of an unreliable production line operated according to demand with constant work-in-process (CONWIP). A simulation model is developed based on the discrete model, and several case problems are analyzed using it. The model is used to optimize storage space capacities at intermediate stages and the number of kanbans at the last stage, which triggers production at the first stage. Furthermore, the effects of several line parameters on the production rate are analyzed using design of experiments.
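A minimal discrete-time sketch of the CONWIP mechanism, assuming illustrative stage-failure and card-count parameters rather than the paper's model, is given below: a fixed card count caps total WIP, and a completion at the last stage releases a card that authorizes a new job at the first stage.

import random

def simulate_conwip(n_stages=3, cards=6, horizon=100_000, p_fail=0.02, seed=0):
    rng = random.Random(seed)
    buffers = [0] * n_stages          # jobs waiting or in process at each stage
    in_system = 0                     # jobs currently holding a CONWIP card
    finished = 0
    for _ in range(horizon):          # one tick = one potential unit move per stage
        if in_system < cards:         # a free card admits a new job at stage 1
            buffers[0] += 1
            in_system += 1
        for s in reversed(range(n_stages)):            # move jobs downstream
            if buffers[s] and rng.random() > p_fail:   # stage is up this tick
                buffers[s] -= 1
                if s == n_stages - 1:
                    finished += 1     # job leaves the line; its card is freed
                    in_system -= 1
                else:
                    buffers[s + 1] += 1
    return finished / horizon         # throughput (jobs per tick)

print(simulate_conwip(cards=6))
print(simulate_conwip(cards=2))       # a tighter WIP cap lowers throughput

Sweeping the card count and buffer capacities in such a loop is a natural way to set up the design-of-experiments analysis mentioned above.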

An Artificial Immune System for a Multi-Agent Robotics System

This paper explores an application of an adaptive learning mechanism for robots based on the natural immune system. Most of the research carried out so far is based either on the innate or on the adaptive characteristics of the immune system; we present a combination of the two to achieve behavior arbitration, wherein a robot learns to detect vulnerable areas of a track and adapts to the required speed over such portions. The test bed comprises two Lego robots deployed simultaneously on two predefined, nearly concentric tracks, with the outer robot capable of helping the inner one when it misaligns. The helper robot works in a damage-control mode by realigning itself to guide the other robot back onto its track. The panic-stricken robot records the conditions under which it misaligned and learns to detect and adapt to similar conditions, thereby making the overall system immune to such failures.

Fault Localization and Alarm Correlation in Optical WDM Networks

Providing resilience against failures is an essential requirement for high-speed networks. A main concern in designing next-generation optical networks is protecting and restoring high-capacity WDM networks from failures. Quick detection, identification, and restoration make networks more robust and reliable, even though failures cannot be avoided. Hence, it is necessary to develop fast, efficient, and dependable fault localization and detection mechanisms. In this paper we propose a new fault localization algorithm for WDM networks which can identify the location of a failure on a failed lightpath. Our algorithm detects the failed connection and then attempts to reroute the data stream through an alternate path. In addition, we develop an algorithm to analyze the alarms generated by the components of an optical network in the presence of a fault. It uses alarm correlation to reduce the list of suspected components shown to network operators. Our simulation results show that the proposed algorithms achieve lower blocking probability and delay while attaining higher throughput.
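The alarm-correlation idea can be pictured with the short sketch below: each alarm implicates the components whose failure could have raised it, and intersecting these suspect sets narrows what operators see. The lightpath topology and the alarm-to-component mapping are assumptions for illustration, not the paper's network model.

def correlate(alarms, coverage):
    """alarms: iterable of alarm ids raised together.
    coverage: dict alarm id -> set of components whose failure can raise it.
    Returns the minimal suspect set consistent with all raised alarms."""
    suspects = None
    for a in alarms:
        suspects = coverage[a] if suspects is None else suspects & coverage[a]
    return suspects or set()

# Lightpath A-B-C-D with monitors at B, C, and D (downstream of each hop).
coverage = {
    "LOS@B": {"link_AB", "tx_A"},
    "LOS@C": {"link_AB", "tx_A", "link_BC", "node_B"},
    "LOS@D": {"link_AB", "tx_A", "link_BC", "node_B", "link_CD", "node_C"},
}
# All three loss-of-signal alarms fire together: the fault is upstream of B.
print(correlate(["LOS@B", "LOS@C", "LOS@D"], coverage))  # {'link_AB', 'tx_A'}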

Engineered Cement Composite Materials Characterization for Tunneling Applications

Cements, which are intrinsically brittle materials, can exhibit a degree of pseudo-ductility when reinforced with a sufficient volume fraction of a fibrous phase. This class of materials, called Engineered Cement Composites (ECC), has the potential to be used in future tunneling applications where a level of pseudo-ductility is required to avoid brittle failures. However, uncertainties remain regarding their mechanical performance. Previous work has focused on comparatively thin specimens; for future civil engineering applications, however, it is imperative that the tensile behavior of thicker specimens is understood. In the present work, specimens containing cement powder and admixtures have been manufactured following two different processes and tested in tension. Multiple matrix cracking has been observed during tensile testing, leading to a "strain-hardening" behavior and confirming the possible suitability of ECC materials used as thick sections (greater than 50 mm) in tunneling applications.

Increasing Profitability Supported by Innovative Methods and Designing Monitoring Software in Condition-Based Maintenance: A Case Study

In the present article, a new method is developed to enhance the application of equipment monitoring, which in turn improves the economic impact of condition-based maintenance in an automobile parts manufacturing factory. This study also describes how effective software with a simple database can be utilized to achieve cost-effective improvements in maintenance performance. The most important results of this project are: 1. a 63% reduction in direct and indirect maintenance costs; 2. the creation of a proper database to analyze failures; 3. the creation of a method to control system performance and extend it to similar systems; 4. the design of software to analyze the database and consequently create the technical knowledge needed to face unusual conditions of the system. Moreover, the results of this study show that the concept and philosophy of maintenance have not been well understood in most Iranian industries; thus, more investment is strongly required to improve maintenance conditions.

Bond Graph and Bayesian Networks for Reliable Diagnosis

The Bond Graph, as a unified multidisciplinary tool, is widely used not only for dynamic modelling but also for Fault Detection and Isolation, because of its structural and causal properties. A binary Fault Signature Matrix can be generated systematically, but making the final binary decision is not always feasible because of the problems inherent in such a method. The purpose of this paper is to introduce a methodology that improves the classical binary decision-making method, so that unknown and identical failure signatures can be treated and robustness improved. The approach consists of associating the evaluated residuals with the components' reliability data to build a Hybrid Bayesian Network. This network is used in two distinct inference procedures: one for the continuous part and the other for the discrete part. The continuous nodes of the network yield the prior probabilities of the components' failures, which are used by the inference procedure on the discrete part to compute the posterior probabilities of the failures. The developed methodology is applied to a real steam generator pilot process.
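A toy sketch of the discrete inference step, assuming a single-fault hypothesis and invented priors and residual-signature likelihoods (none of which come from the paper), shows how reliability data can disambiguate identical signatures:

def posterior_faults(priors, likelihoods, signature):
    """priors: dict component -> prior failure probability (reliability data).
    likelihoods: dict component -> dict signature -> P(signature | fault).
    Returns P(fault | signature) per component, assuming single faults."""
    joint = {c: priors[c] * likelihoods[c].get(signature, 0.0) for c in priors}
    total = sum(joint.values())
    return {c: (p / total if total else 0.0) for c, p in joint.items()}

priors = {"pump": 0.02, "valve": 0.05}             # assumed reliability data
likelihoods = {                                    # assumed residual models
    "pump":  {(1, 0): 0.7, (1, 1): 0.3},
    "valve": {(1, 1): 0.6, (0, 1): 0.4},
}
# Both residuals fired: the binary signature (1, 1) alone cannot separate
# the two candidate faults, but the reliability priors break the tie.
print(posterior_faults(priors, likelihoods, (1, 1)))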

Software Process Improvement: An Organizational Change that Needs to be Managed and Motivated

As seen in the literature, about 70% of improvement initiatives fail, and a significant number do not even get started. This paper analyses the problem of failing Software Process Improvement (SPI) initiatives and proposes good practices, supported by motivational tools, that can help minimize failures. It elaborates on the hypothesis that human factors are poorly addressed by deployers, especially because implementation guides usually emphasize only technical factors. This research was conducted with SPI deployers and analyses 32 SPI initiatives. The results indicate that although human factors are not commonly highlighted in guidelines, the successful initiatives usually address human factors implicitly. This research shows that practices based on human factors indeed play a crucial role in successful implementations of SPI, proposes change management as a theoretical framework to introduce those practices in the SPI context, and suggests some motivational tools, based on SPI deployers' experience, to support it.

Implementation of a Watchdog Timer for Fault-Tolerant Computing on a Cluster Server

In today's technology era, clusters have become a necessity for modern computing and data applications, since many applications take a long time (even days or months) to compute. Although parallelization speeds up computation, the time required for many applications can still be large. Thus, the reliability of the cluster becomes a very important issue, and the implementation of a fault-tolerance mechanism becomes essential. The difficulty of designing a fault-tolerant cluster system grows with the variety of possible failures. The most imperative requirement is that an algorithm that handles a simple failure in a system must also tolerate more severe failures. In this paper, we implement the watchdog timer concept in a parallel environment to take care of failures. The implementation of this simple algorithm in our project helps us handle different types of failures, and we found that the reliability of the cluster improves as a result.
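A minimal watchdog-timer sketch in the spirit of this approach is given below: a monitored worker must "kick" the timer periodically, and a missed deadline triggers a recovery callback. The timeout values and the recovery action are assumptions made for illustration, not the paper's cluster implementation.

import threading
import time

class Watchdog:
    def __init__(self, timeout_s, on_expire):
        self.timeout_s = timeout_s
        self.on_expire = on_expire
        self._last_kick = time.monotonic()
        self._stop = threading.Event()
        threading.Thread(target=self._watch, daemon=True).start()

    def kick(self):
        self._last_kick = time.monotonic()   # heartbeat from the worker

    def stop(self):
        self._stop.set()

    def _watch(self):
        while not self._stop.wait(0.1):      # poll every 100 ms
            if time.monotonic() - self._last_kick > self.timeout_s:
                self.on_expire()             # e.g., restart or migrate the job
                self._last_kick = time.monotonic()

# Usage: a worker that hangs stops kicking, so the watchdog triggers recovery.
wd = Watchdog(timeout_s=1.0, on_expire=lambda: print("worker hung: recovering"))
for step in range(5):
    time.sleep(0.3)       # simulated healthy work
    wd.kick()
time.sleep(2.0)           # simulated hang: no kicks, so the watchdog fires
wd.stop()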

A Fast Sensor Relocation Algorithm in Wireless Sensor Networks

Sensor relocation repairs coverage holes caused by node failures. One way to repair coverage holes is to find redundant nodes to replace the faulty ones. Most previous approaches took a long time to find redundant nodes, since the redundant nodes were scattered randomly around the sensing field. To record the precise positions of sensor nodes, most studies assumed that GPS was installed in the sensor nodes; however, the high cost and power consumption of GPS are heavy burdens for sensor nodes. We therefore propose a fast sensor relocation algorithm that arranges redundant nodes to form redundant walls without GPS. Redundant walls are constructed at the position where the average distance to each sensor node is shortest, and they can guide sensor nodes to find redundant nodes in the minimum time. Simulation results show that our algorithm finds the proper redundant node in the minimum time and reduces the relocation time with low message complexity.
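As a hedged illustration of one geometric step implied above, the sketch below computes the point with the shortest average distance to all sensor nodes (the geometric median, via Weiszfeld's iteration); this is a standard stand-in for the wall-placement criterion, not the paper's GPS-free construction.

import numpy as np

def geometric_median(points, iters=100, eps=1e-9):
    """points: (n, 2) array of sensor coordinates. Returns the point
    minimizing the average Euclidean distance to all sensors."""
    y = points.mean(axis=0)                    # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(points - y, axis=1)
        d = np.maximum(d, eps)                 # avoid division by zero
        w = 1.0 / d
        y_new = (points * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            break
        y = y_new
    return y

rng = np.random.default_rng(3)
sensors = rng.uniform(0, 100, size=(50, 2))    # synthetic sensing field
print(geometric_median(sensors))               # anchor with minimum average distance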