Abstract: The fault current level through electric devices has a significant impact on their failure probability. An increased fault current can exceed the rated capacity of circuit breakers and switchgear and alter the operating characteristics of overcurrent relays. The Superconducting Fault Current Limiter (SFCL) has emerged as a promising alternative for mitigating these problems. Because the fault current reduction achieved depends on the installation point, the location of the SFCL is very important. Moreover, by reducing the fault current, an SFCL prevents the surrounding protective devices from being exposed to it, which in turn changes system reliability. In this paper, we propose a method for determining the optimal location at which to install an SFCL in a power system, and we evaluate the reliability of the power system with the SFCL installed. The efficiency and effectiveness of the method are demonstrated through numerical examples, and reliability indices are evaluated at each load point. The results show how system reliability changes when an SFCL is installed.
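As a minimal illustration of why the SFCL's location matters (a sketch with assumed per-unit impedances, not the paper's model), the fault current at a bus falls as the limiter's inserted impedance raises the total impedance between source and fault:

```python
# Sketch: fault current magnitude with and without an SFCL in the fault path.
# All impedance and voltage values are illustrative per-unit assumptions.

def fault_current(v_source, z_path):
    """Magnitude of the fault current for a given source-to-fault impedance."""
    return abs(v_source / z_path)

V = 1.0 + 0j             # per-unit source voltage (assumption)
Z_SOURCE = 0.02 + 0.10j  # source impedance (assumption)
Z_LINE = 0.01 + 0.05j    # line impedance between source and fault (assumption)
Z_SFCL = 0.00 + 0.20j    # impedance inserted by the quenched SFCL (assumption)

i_no_sfcl = fault_current(V, Z_SOURCE + Z_LINE)
i_with_sfcl = fault_current(V, Z_SOURCE + Z_LINE + Z_SFCL)

# The SFCL only limits faults whose current path crosses it, which is
# why the reduction achieved differs with the installation point.
reduction = 1.0 - i_with_sfcl / i_no_sfcl
```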
Abstract: Recordings from recent earthquakes have provided evidence that ground motions in the near field of a rupturing fault differ from ordinary ground motions, as they can contain a large energy, or “directivity”, pulse. This pulse can cause considerable damage during an earthquake, especially to structures with natural periods close to those of the pulse. Failures of modern engineered structures observed within the near-fault region in recent earthquakes have revealed the vulnerability of existing RC buildings against pulse-type ground motions. This may be due to the fact that these modern structures had been designed primarily using the design spectra of available standards, which were developed from stochastic processes with the relatively long duration that characterizes more distant ground motions. Many recently designed and constructed buildings may therefore require strengthening in order to perform well when subjected to near-fault ground motions. Fiber Reinforced Polymers (FRP) are considered a viable alternative, due to their relatively easy and quick installation, low life-cycle costs, and zero maintenance requirements. The objective of this paper is to investigate the adequacy of Artificial Neural Networks (ANN) for determining the three-dimensional dynamic response of FRP-strengthened RC buildings under near-fault ground motions. For this purpose, an ANN model is proposed to estimate the base shear force, base bending moments, and roof displacement of buildings in two directions. A training set of 168 buildings and a validation set of 21 buildings are produced from finite element analyses of the dynamic response of RC buildings under near-fault earthquakes. It is demonstrated that the neural network based approach is highly successful in determining the response.
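The ANN regression idea can be sketched as below: a single-hidden-layer network trained by gradient descent to map input features to a response quantity. This is a toy sketch only; the paper's actual network, its inputs (building and ground-motion parameters), and its outputs (base shear, bending moments, roof displacement) are assumptions not reproduced here.

```python
# Minimal single-hidden-layer neural network trained by per-sample gradient
# descent. A sketch of the ANN regression approach only; the toy data below
# stand in for the paper's (building features) -> (response quantity) pairs.
import math
import random

random.seed(0)

def init(n_in, n_hid):
    w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
    w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hid)]
    return w1, w2

def forward(x, w1, w2):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    y = sum(w * hi for w, hi in zip(w2, h))  # linear output unit
    return h, y

def train(data, n_in=2, n_hid=8, lr=0.05, epochs=500):
    w1, w2 = init(n_in, n_hid)
    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x, w1, w2)
            err = y - t
            for j in range(n_hid):
                grad_h = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
                for i in range(n_in):
                    w1[j][i] -= lr * grad_h * x[i]
                w2[j] -= lr * err * h[j]
    return w1, w2

# Toy data set (assumption): a smooth function of two features.
data = [((a, b), 0.5 * a + 0.3 * b * b)
        for a in (0.0, 0.5, 1.0) for b in (0.0, 0.5, 1.0)]
w1, w2 = train(data)
mse = sum((forward(x, w1, w2)[1] - t) ** 2 for x, t in data) / len(data)
```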
Abstract: The objective of this study is to evaluate the threshold stress of a clay-with-sand subgrade soil. Threshold stress can be defined as the stress level above which cyclic loading leads to excessive deformation and eventual failure. Determining the thickness of highway formations using the threshold stress approach gives a more realistic assessment of soil behaviour, because the soil is subjected to repeated loading from moving vehicles. Threshold stress can be evaluated by a plastic strain criterion, which is based on the accumulated plastic strain behaviour during cyclic loading [1]. Several all-round pressure conditions of the subgrade soil, namely zero confinement, low all-round pressure, and high all-round pressure, are investigated, and the threshold stresses for these soil conditions are determined. The threshold stresses are 60%, 31%, and 38.6% for the unconfined partially saturated sample, the low-effective-stress saturated sample, and the high-effective-stress saturated sample, respectively.
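The plastic strain criterion can be sketched as follows: a stress level is below threshold when the plastic strain accumulated per cycle dies out, and above it when strain keeps growing. The strain histories below are synthetic assumptions, not the study's test results.

```python
# Sketch of a plastic strain criterion for threshold stress: a stress level
# is below threshold if the per-cycle plastic strain increment vanishes over
# the final cycles. Strain histories here are synthetic assumptions.
import math

def stabilizes(strain_history, tol=1e-4, tail=10):
    """True if every strain increment over the final `tail` cycles is below tol."""
    tail_incs = [b - a for a, b in
                 zip(strain_history[-tail - 1:-1], strain_history[-tail:])]
    return max(tail_incs) < tol

def synthetic_history(stress_ratio, cycles=200):
    """Assumed behaviour: below a ratio of ~0.6 strain grows like log(N)
    (stabilizing); above it, a linearly growing term is added (failure)."""
    hist = []
    for n in range(1, cycles + 1):
        strain = 0.002 * math.log(n + 1)
        if stress_ratio > 0.6:
            strain += 0.001 * n * (stress_ratio - 0.6)
        hist.append(strain)
    return hist

def threshold(ratios):
    """Highest tested stress ratio whose strain history stabilizes."""
    return max(r for r in ratios if stabilizes(synthetic_history(r)))

ratios = [0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
t = threshold(ratios)
```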
Abstract: This paper provides a replacement policy for warranty products with different failure rates from the consumer's viewpoint. Assume that the product is replaced once within a finite planning horizon, and that the failure rate of the second product is lower than that of the first. Within the warranty period (WP), a failed product is corrected by minimal repair at no cost to the consumer. After the WP, a failed product is repaired at a fixed repair cost to the consumer. In addition, each failure incurs a fixed downtime cost to the consumer over the finite planning horizon. In this paper, we derive a model of the expected total disbursement cost within a finite planning horizon, and some properties of the optimal replacement policy are obtained under reasonable conditions. Finally, numerical examples are given to illustrate the features of the optimal replacement policy under various maintenance costs.
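The cost structure described above can be sketched numerically. The power-law (Weibull-type) failure intensity and all parameter values below are assumptions for illustration, not the paper's model: under minimal repair the expected number of failures in [0, t] equals the cumulative intensity, failures inside the warranty incur only the downtime cost, and failures after it add the repair cost.

```python
# Sketch of a consumer replacement-time model under minimal repair.
# Assumptions (not from the paper): power-law cumulative failure intensity
# Lambda(t) = (t/eta)^beta, so the expected number of failures in [0, t]
# is Lambda(t); minimal repair leaves the intensity unchanged.

def cum_failures(t, eta, beta):
    return (t / eta) ** beta if t > 0 else 0.0

def expected_cost(tau, horizon, wp, c_repair, c_down, eta1, beta1, eta2, beta2):
    """Expected cost when product 1 is used on [0, tau] and the more
    reliable product 2 on [tau, horizon]; each has warranty length wp."""
    cost = 0.0
    for life, eta, beta in ((tau, eta1, beta1), (horizon - tau, eta2, beta2)):
        n_all = cum_failures(life, eta, beta)            # every failure: downtime
        n_after_wp = n_all - cum_failures(min(life, wp), eta, beta)
        cost += c_down * n_all + c_repair * n_after_wp   # repairs after warranty
    return cost

# Product 2 assumed more reliable (larger eta -> lower intensity).
args = dict(horizon=10.0, wp=2.0, c_repair=50.0, c_down=10.0,
            eta1=3.0, beta1=2.0, eta2=5.0, beta2=2.0)

taus = [i / 10 for i in range(1, 100)]                   # grid search
best_tau = min(taus, key=lambda t: expected_cost(t, **args))
```

Grid search stands in for the analytical optimality conditions derived in the paper.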
Abstract: Therapeutic success is the aim of any therapeutic intervention, but therapeutic failure is the other side of the same coin. The purpose of this study is to present the activity of a personal development group composed of 14 participants (psychologists, doctors, and a priest) registered for a two-day course of integrative psychotherapy. The objectives of the study centre on: the handling, by the therapist/trainer, of the moment when the personal development group broke down; the analysis of the personal situation of the trainer and of some group participants; and a brief presentation of the main working methods applied with participants to repair the therapeutic relationship and to manage the countertransference. The therapist's orientation is integrative, and the approach taken includes transactional analysis techniques, role play, Gestalt, and family systemic psychotherapy. The conclusions obtained represent landmarks for future activity within the group and strengthen the therapeutic relationship with it.
Abstract: Grazing and pastoral overloading driven by human factors result in significant land desertification. Failure to treat desertification as a serious problem can lead to an environmental disaster because of the damage caused by land encroachment, and soil in residential and urban areas is affected by the deterioration of vegetation. Overgrazing, or grazing on open and unregulated lands, is practiced in these areas almost throughout the year, especially during the growth cycle of edible plants, leading to their disappearance. In addition, the number of livestock in these areas exceeds the capacity of the pastures; this pastoral overloading results in deterioration and desertification in the region. The rarity and extinction of some edible plants in the region and the emergence of plants unsuitable for grazing must also be taken into consideration, along with the dust and sand storms that appear during the dry seasons (summer to autumn) owing to the degradation of vegetation. These observations show that strategic plans and regulations that protect the environment from desertification must be developed. Increased pastoral load is therefore a key human factor in the deterioration of vegetation cover, leading to land desertification in this region.
Abstract: This study focuses on teamwork in Finnish working life. Through a wide cross-section of teams, the study examines the causes to which team members attribute the outcomes of their teams. Qualitative data were collected from 314 respondents, who wrote 616 stories describing memorable experiences of success and failure in teamwork. The stories yielded 1930 explanations. The findings indicate that both favorable and unfavorable team outcomes are perceived as being caused by the characteristics of team members, relationships between members, team communication, team structure, team goals, team leadership, and external forces. These categories represent different attribution levels in the context of organizational teamwork.
Abstract: The noise level has critical effects on the diagnostic performance of the signal-averaged electrocardiogram (SAECG), because the true onset and offset of the QRS complex can be masked by residual noise and are sensitive to its level. Several studies and commercial machines perform SAECG analysis using either a fixed number of heart beats (typically between 200 and 600) or a predefined noise level (typically between 0.3 and 1.0 μV) in each of the X, Y, and Z leads. However, the different criteria and methods used to perform SAECG cause discrepancies in the noise levels among study subjects. According to the recommendations of the 1991 ESC, AHA, and ACC Task Force consensus document on the use of SAECG, the determination of onset and offset is closely related to the mean and standard deviation of a noise sample. This study therefore performs SAECG analysis using consistent root-mean-square (RMS) noise levels among study subjects and analyzes the effects of the noise level on the SAECG. It also evaluates the differences between normal subjects and chronic renal failure (CRF) patients in the time-domain SAECG parameters.
The study subjects comprised 50 normal Taiwanese subjects and 20 CRF patients. During signal-averaged processing, different RMS noise levels were set in order to evaluate their effects on three time-domain parameters: (1) the filtered total QRS duration (fQRSD), (2) the RMS voltage of the last 40 ms of the QRS (RMS40), and (3) the duration of low-amplitude signals below 40 μV (LAS40). The results demonstrate that reducing the RMS noise level increases fQRSD and LAS40, decreases RMS40, and further increases the differences in fQRSD and RMS40 between normal subjects and CRF patients. The SAECG may also become abnormal as the RMS noise level is reduced. In conclusion, it is essential to establish diagnostic criteria for SAECG using consistent RMS noise levels in order to reduce noise level effects.
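The idea of averaging to a consistent RMS noise target can be sketched as follows: averaging N beats reduces uncorrelated noise roughly as 1/sqrt(N), so beats are accumulated until the residual RMS noise estimate falls below the target. The beats, noise level, and noise-estimation window below are synthetic assumptions, not the study's recordings.

```python
# Sketch: signal averaging reduces RMS noise roughly as 1/sqrt(N), so beats
# can be accumulated until a consistent target RMS noise level is reached.
# Beats here are synthetic (assumed), not recorded SAECG data.
import math
import random

random.seed(1)

BEAT_LEN = 300
TEMPLATE = [math.sin(2 * math.pi * i / BEAT_LEN) for i in range(BEAT_LEN)]
NOISE_SD = 5.0  # per-beat noise, arbitrary microvolt-like units (assumption)

def new_beat():
    return [s + random.gauss(0.0, NOISE_SD) for s in TEMPLATE]

def rms(window):
    return math.sqrt(sum(v * v for v in window) / len(window))

def average_to_noise_target(target_rms, max_beats=2000):
    """Average beats until the residual RMS noise, estimated on a
    signal-free segment recorded with each beat, drops below target_rms."""
    acc_noise = [0.0] * 50          # pure-noise samples used for the estimate
    acc_beat = [0.0] * BEAT_LEN
    for n in range(1, max_beats + 1):
        acc_beat = [a + b for a, b in zip(acc_beat, new_beat())]
        noise = [random.gauss(0.0, NOISE_SD) for _ in range(50)]
        acc_noise = [a + b for a, b in zip(acc_noise, noise)]
        if rms([v / n for v in acc_noise]) <= target_rms:
            return [v / n for v in acc_beat], n
    return [v / n for v in acc_beat], max_beats

avg_beat, n_used = average_to_noise_target(target_rms=0.3)
```

With a per-beat noise of 5.0 units, reaching a 0.3-unit target requires on the order of (5/0.3)^2, i.e. roughly 280, beats.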
Abstract: This article proposes a new methodology for SMEs (small and medium-sized enterprises) to characterize their performance in quality, highlighting weaknesses and areas for improvement. The methodology aims to identify the principal causes of quality problems and to help prioritize improvement initiatives. It is a self-assessment methodology intended to be easy to implement by companies with a low maturity level in quality. The methodology is organized in six steps, which include gathering information about predetermined processes and subprocesses of quality management, defined on the basis of the well-known Juran trilogy for quality management (quality planning, quality control, and quality improvement), and about predetermined result categories, defined on the basis of the quality concept. A set of tools for data collection and analysis is used, including interviews, flowcharts, process analysis diagrams, and Failure Mode and Effects Analysis (FMEA). The article also presents the conclusions obtained from applying the methodology in two case studies.
Abstract: Optical fault monitoring in an FTTH-PON using an ACS is demonstrated. The device achieves real-time fault monitoring for the protection of the feeder fiber. In addition, the ACS can distinguish an optical fiber fault from the transmission services delivered to other customers in the FTTH-PON. It is essential to use a wavelength different from the operating wavelengths of the triple-play services for failure detection; the ACS therefore uses the 1625 nm wavelength for monitoring and failure detection control. Our solution works on a standard local area network (LAN) using specially designed hardware interfaced with a microcontroller with integrated Ethernet.
Abstract: This paper adopts the notion of the expectation-perception gap of system users as a definition of information systems (IS) failure. Problems leading to the expectation-perception gap are identified and modelled as five interrelated discrepancies, or gaps, throughout the process of information systems development (ISD). The paper describes an empirical study of how systems developers and users perceive the size of each gap and the extent to which each problematic issue contributes to it. The key to achieving success in ISD is to keep the expectation-perception gap closed by closing all five constituent gaps. The gap model suggests that most factors in IS failure are related to the organizational, cognitive, and social aspects of information systems design. Organizational requirements analysis, being the weakest link of IS development, is particularly worthy of investigation.
Abstract: Information technology (IT) projects are always accompanied by various risks, and because of the high failure rate of such projects, managing risks so as to neutralize, or at least reduce, their effects on project success is essential. In this paper, the fuzzy analytic hierarchy process (FAHP) is used as a risk evaluation methodology to prioritize and organize the risk factors faced in IT projects. A real case, the design and implementation of an integrated information system at a vehicle manufacturing company in Iran, is studied. The related risk factors are identified, and expert qualitative judgments about these factors are acquired. These judgments are translated into fuzzy numbers and used as input to FAHP, which then ranks and prioritizes the risk factors, making project managers aware of the most important risks and enabling them to adopt suitable measures against the most damaging ones.
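A minimal FAHP ranking step can be sketched as below: pairwise judgments expressed as triangular fuzzy numbers, fuzzy geometric-mean weights, and centroid defuzzification. The risk factor names and judgment values are illustrative assumptions, not the case study's data, and the paper's exact FAHP variant may differ.

```python
# Minimal FAHP sketch: triangular fuzzy numbers (l, m, u) for pairwise
# comparisons, fuzzy geometric-mean weights, centroid defuzzification.
# The risk factors and judgment values below are illustrative assumptions.
import math

FACTORS = ["schedule", "requirements change", "vendor", "technical"]
# Upper-triangle judgments M[(i, j)] = (l, m, u); reciprocals are derived.
M = {
    (0, 1): (1, 2, 3), (0, 2): (2, 3, 4), (0, 3): (1, 1, 2),
    (1, 2): (1, 2, 3), (1, 3): (1 / 2, 1, 2), (2, 3): (1 / 4, 1 / 3, 1 / 2),
}

def entry(i, j):
    if i == j:
        return (1.0, 1.0, 1.0)
    if (i, j) in M:
        return M[(i, j)]
    l, m, u = M[(j, i)]
    return (1 / u, 1 / m, 1 / l)     # fuzzy reciprocal

def fuzzy_weights(n):
    # Fuzzy geometric mean of each row, then fuzzy normalization.
    gms = []
    for i in range(n):
        l = math.prod(entry(i, j)[0] for j in range(n)) ** (1 / n)
        m = math.prod(entry(i, j)[1] for j in range(n)) ** (1 / n)
        u = math.prod(entry(i, j)[2] for j in range(n)) ** (1 / n)
        gms.append((l, m, u))
    tl = sum(g[0] for g in gms)
    tm = sum(g[1] for g in gms)
    tu = sum(g[2] for g in gms)
    # Fuzzy division: lower bound divided by upper total, and vice versa.
    return [(l / tu, m / tm, u / tl) for l, m, u in gms]

def defuzzify(tfn):
    return sum(tfn) / 3.0            # centroid of a triangular fuzzy number

weights = [defuzzify(w) for w in fuzzy_weights(len(FACTORS))]
s = sum(weights)
crisp = [w / s for w in weights]
ranking = sorted(zip(FACTORS, crisp), key=lambda p: -p[1])
```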
Abstract: This paper discusses the performance modeling and availability analysis of the yarn dyeing system of a textile industry. The textile industry is a complex, repairable engineering system, and its yarn dyeing system consists of five subsystems arranged in a series configuration. For performance modeling and availability analysis, a performance evaluation model has been developed through mathematical formulation based on the Markov birth-death process. Differential equations have been derived on the basis of a probabilistic approach using a transition diagram, and solved using the normalizing condition to obtain the steady-state availability, a performance measure of the system concerned. The system performance is further analyzed with the help of decision matrices, which provide the availability levels for different combinations of failure and repair rates of the various subsystems. The findings of this paper are therefore considered useful for the analysis of availability and for determining the best possible maintenance strategies to be implemented in the future to enhance system performance.
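The paper solves the full set of Markov balance equations; under the simplifying assumption of independent subsystems, the steady-state result reduces to a closed form that can be sketched directly: each two-state subsystem with failure rate λ and repair rate μ has availability μ/(λ+μ), and a series system is available only when every subsystem is, so the rates below (which are assumptions, not the textile plant's data) multiply:

```python
# Sketch: steady-state availability of a series system of repairable
# subsystems. For one subsystem with constant failure rate lam and repair
# rate mu, the two-state Markov chain gives A = mu / (lam + mu); assuming
# independent subsystems in series, the system availability is the product.
# The five (lam, mu) pairs below are illustrative assumptions.

def subsystem_availability(lam, mu):
    return mu / (lam + mu)

def series_availability(rates):
    a = 1.0
    for lam, mu in rates:
        a *= subsystem_availability(lam, mu)
    return a

RATES = [(0.01, 0.5), (0.02, 0.4), (0.005, 0.3), (0.03, 0.6), (0.015, 0.45)]
A = series_availability(RATES)

# Decision-matrix style sweep: system availability for combinations of
# failure and repair rates of the first subsystem, the others held fixed.
matrix = {
    (lam, mu): series_availability([(lam, mu)] + RATES[1:])
    for lam in (0.005, 0.01, 0.02)
    for mu in (0.25, 0.5, 1.0)
}
```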
Abstract: An embedded system for single event upset (SEU) testing must be designed to prevent system failure caused by high-energy particles during SEU measurement. An SEU is a phenomenon in which data stored in a semiconductor device are temporarily changed by high-energy particles. In this paper, we present an embedded system for static random access memory (SRAM) SEU testing. The SRAMs are placed on the device under test (DUT), which is separated from the control board that manages the DUT and measures the occurrence of SEUs. Care must be taken to prevent system failure while managing the DUT and making accurate measurements of SEUs. We measured the occurrence of SEUs in five different SRAMs at three cyclotron beam energies: 30, 35, and 40 MeV. The average number of SEUs per SRAM ranged from 3.75 to 261.00.
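The core of the measurement can be sketched as follows: write a known pattern into the SRAM, read it back after irradiation, and count flipped bits with XOR. The pattern and the simulated read-back below are assumptions; real test code would access the DUT hardware.

```python
# Sketch of the SEU counting step: write a known pattern into SRAM,
# read it back after irradiation, and count flipped bits with XOR.
# Here the read-back words are simulated; real code would access the DUT.

def count_seus(written, read_back):
    """Number of bit flips between the written and read-back words."""
    return sum(bin(w ^ r).count("1") for w, r in zip(written, read_back))

PATTERN = [0xAA] * 1024            # alternating-bit test pattern (assumption)
read_back = list(PATTERN)
read_back[3] ^= 0x01               # simulated upset: one flipped bit
read_back[100] ^= 0x82             # simulated upset: two flipped bits

n_seu = count_seus(PATTERN, read_back)
```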
Abstract: This study proposes a multi-response surface optimization problem (MRSOP) for determining the proper choices in a process parameter design (PPD) decision problem in the noisy environment of a grease position process in the electronics industry. The proposed model attempts to maximize dual process responses based on the mean number of parts between failures on the left and right processes. The conventional modified simplex method, and its hybridization with a stochastic operator from the hunting search algorithm, are applied to determine the proper levels of the controllable design parameters affecting the quality performances. A numerical example demonstrates the feasibility of applying the proposed model to the PPD problem via the two iterative methods, and their advantages are discussed. The numerical results demonstrate that the hybridization is superior to the conventional method: in this study, the mean number of parts between failures on the left and right lines improves by approximately 39.51%. All experimental data presented in this research have been normalized to disguise actual performance measures, as the raw data are considered confidential.
Abstract: Failure mode and effects analysis (FMEA) is an effective technique for preventing potential problems and identifying the actions needed to remove the causes of errors. Oil producing companies play a critical role in the oil industry of Iran as a developing country, and Sepahan Oil Co. makes a considerable contribution to it. The aim of this research is to show how FMEA can be applied to improve the quality of products at Sepahan Oil Co. For this purpose, the four-liter production line of the company was selected for investigation. The findings imply that the application of FMEA has reduced scrap from 50,000 ppm to 5,000 ppm and has resulted in a 0.92 percent decrease in oil waste.
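The FMEA prioritization step can be sketched as below: each failure mode gets a risk priority number (RPN), the product of its severity, occurrence, and detection ratings, and the modes are ranked by RPN. The failure modes and ratings shown are illustrative assumptions, not entries from the company's actual FMEA worksheet.

```python
# Sketch of FMEA prioritization: the risk priority number (RPN) is the
# product of severity, occurrence, and detection ratings (each 1-10).
# The failure modes and ratings below are illustrative assumptions.

failure_modes = [
    # (description, severity, occurrence, detection)
    ("under-filled four-liter can", 6, 5, 4),
    ("leaking cap seal", 8, 4, 5),
    ("mislabeled can", 4, 2, 2),
]

def rpn(sev, occ, det):
    return sev * occ * det

# Highest RPN first: these modes get corrective actions first.
ranked = sorted(failure_modes, key=lambda fm: -rpn(*fm[1:]))
top_mode = ranked[0][0]
```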
Abstract: In today's new technology era, clusters have become a necessity for modern computing and data applications, since many applications take a long time (even days or months) for computation. Although parallelization speeds up computation, the time required by many applications can still be long. The reliability of the cluster therefore becomes a very important issue, and the implementation of a fault tolerance mechanism becomes essential. The difficulty of designing a fault-tolerant cluster system grows with the variety of possible failures, and a key requirement is that an algorithm that handles a simple failure in the system must also tolerate more severe failures. In this paper, we implement the watchdog timer concept in a parallel environment to take care of failures. The implementation of this simple algorithm in our project helps us handle different types of failures, and we found that the reliability of the cluster improves as a result.
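The essence of the watchdog timer idea can be sketched as follows: a worker "kicks" its watchdog periodically, and the monitor declares the worker failed when no kick arrives within the timeout. Time is passed in explicitly to keep the sketch deterministic; a real cluster implementation would use the system clock and restart the failed node or process.

```python
# Sketch of the watchdog-timer idea used for fault tolerance: a worker
# kicks its watchdog periodically; the monitor declares the worker failed
# when no kick arrives within the timeout. Simulated time keeps the
# sketch deterministic; a real cluster would use the system clock.

class Watchdog:
    def __init__(self, timeout, now=0.0):
        self.timeout = timeout
        self.last_kick = now

    def kick(self, now):
        self.last_kick = now

    def expired(self, now):
        return now - self.last_kick > self.timeout

wd = Watchdog(timeout=2.0)
events = []
for t in range(10):                 # simulated seconds
    if t < 5:                       # worker healthy: kicks every second
        wd.kick(float(t))           # ...then hangs from t = 5 onward
    if wd.expired(float(t)):        # monitor side: detect the hang
        events.append(("restart", t))
        wd.kick(float(t))           # restarting the worker resets the timer

first_restart = events[0][1]        # hang at t=5 is detected at t=7
```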
Abstract: Acute kidney injury (AKI) is a new worldwide public health problem, and its diagnosis using creatinine remains problematic in clinical practice. The measurement of biomarkers responsible for AKI has therefore received much attention in the past couple of years. The cytokine interleukin-18 (IL-18) has been reported as one of the early biomarkers for AKI, and the most commonly used method to detect this biomarker is an immunoassay. This study used a planar platform to perform an immunoassay with fluorescence detection. Anti-IL-18 antibody was immobilized onto a microscope slide using a covalent binding method. Spiked samples were diluted to concentrations between 10 and 1000 pg/ml to create a calibration curve. The precision of the system was determined using the coefficient of variation (CV), which was found to be less than 10%. The performance of this immunoassay system was compared with measurements from ELISA.
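The precision metric used above can be sketched as follows: the CV of replicate fluorescence readings at each concentration is 100 × (standard deviation / mean), and the assay is acceptable when every CV stays below 10%. The replicate readings below are synthetic assumptions, not the study's data.

```python
# Sketch: assay precision as the coefficient of variation (CV) of replicate
# fluorescence readings at each concentration; CV = 100 * sd / mean.
# The replicate values below are synthetic assumptions.
import statistics

def cv_percent(replicates):
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

replicates_by_conc = {          # pg/ml -> replicate fluorescence readings
    10:   [101.0, 97.5, 103.2, 99.0],
    100:  [540.0, 515.0, 530.5, 548.0],
    1000: [2450.0, 2510.0, 2380.0, 2465.0],
}

cvs = {c: cv_percent(r) for c, r in replicates_by_conc.items()}
all_within_spec = all(v < 10.0 for v in cvs.values())   # the <10% criterion
```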
Abstract: Since the 1980s, banks and financial service institutions have been running an endless race of innovation to cope with advancing technology, fierce competition, and more sophisticated and demanding customers. To guide their innovation efforts, several studies have been conducted to identify the success and failure factors of new financial services. These mainly included organizational factors, marketplace factors, and new service development process factors, and they almost all emphasized the importance of customer and market orientation as a response to the highly perceptual and intangible characteristics of financial services. However, they de-emphasized two critical characteristics of financial services, the high involvement of risk and the close correlation with economic conditions, factors that contributed heavily to the global financial crisis of 2008. This paper reviews the success and failure factors of new financial services and then adds new perspectives emerging from an analysis of the role of innovation in the global financial crisis.
Abstract: Continuity of supply to electric installations is becoming one of the main requirements of the electric supply network (generation, transmission, and distribution of electric energy). Meeting this requirement depends on the one hand on the structure of the electric network, and on the other hand on the availability of the reserve source provided to maintain supply in case of failure of the principal one. The availability of supply depends not only on the reliability parameters of the two sources (principal and reserve) but also on the reliability of the circuit breaker that interlocks the reserve source when the principal one fails. Furthermore, while the principal source is in operation its condition can be monitored reliably, but for the reserve source, which is on standby, preventive maintenance actions carried out at fixed time intervals (periodicity) and for well-defined durations are envisaged so that this source will always be available if the principal source fails. The choice of the periodicity of preventive maintenance of the reserve source directly influences the reliability of the electric feeder system. In this work, on the basis of semi-Markov processes, the influence of the interlocking time of the reserve source on the reliability of an industrial electric network is studied, and the optimal interlocking time in case of failure of the principal source is given; the influence of the periodicity of preventive maintenance of the reserve source is also studied, and its optimal periodicity is given.
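The periodicity trade-off described above can be sketched with a classical simplification (not the paper's semi-Markov derivation): with a constant standby failure rate λ, a standby failure stays hidden until the next maintenance, so the average probability of an unrevealed failure grows with the period T as 1 − (1 − e^(−λT))/(λT); but each maintenance of duration d itself makes the reserve unavailable for d out of every T hours. The optimal period balances the two. λ and d below are assumptions.

```python
# Sketch of the preventive-maintenance periodicity trade-off for a standby
# source (a classical simplification, not the paper's semi-Markov model):
# with constant standby failure rate lam, the average probability of an
# unrevealed failure over a period T is 1 - (1 - exp(-lam*T)) / (lam*T);
# each maintenance of duration d also removes the reserve for d of every
# T hours. The lam and d values are illustrative assumptions.
import math

def standby_unavailability(T, lam=1e-4, d=4.0):
    hidden = 1.0 - (1.0 - math.exp(-lam * T)) / (lam * T)  # unrevealed failure
    maintenance = d / T                                    # maintenance outage
    return hidden + maintenance

periods = [float(t) for t in range(24, 24 * 400, 24)]      # candidate T, hours
best_T = min(periods, key=standby_unavailability)
```

For small λT the hidden term is roughly λT/2, so the optimum is near sqrt(2d/λ), here a few hundred hours.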