Abstract: Modern highly automated production systems face reliability problems. Machine function reliability determines the productivity rate and the efficiency with which expensive industrial facilities are used. Reliability prediction has become an important research area and involves complex mathematical methods and calculations. The reliability of high-productivity automatic technological machines, which consist of complex mechanical, electrical and electronic components, is important, since the failure of these units results in major economic losses for production systems. The reliability of transport and feeding systems for automatic technological machines is equally important, because a transport failure stops the technological machines. This paper presents reliability engineering for the feeding system, and its components, that transports complex-shaped parts to automatic machines. It also discusses the calculation of the reliability parameters of the feeding unit by applying probability theory. The equations produced for calculating the limits of the geometrical sizes of feeders and the probability of the transported parts sticking in the chute represent the reliability of feeders as a function of their geometrical parameters.
Abstract: Process measurement is the task of empirically and objectively assigning numbers to the properties of business processes in such a way as to describe them. Desirable attributes to study and measure include complexity, cost, maintainability, and reliability. In our work we focus on investigating process complexity. We define process complexity as the degree to which a business process is difficult to analyze, understand or explain. One way to analyze a process's complexity is to use a process control-flow complexity measure. In this paper, an attempt has been made to evaluate the control-flow complexity measure in terms of Weyuker's properties, which must be satisfied by any complexity measure to qualify as a good and comprehensive one.
Abstract: An Early Intervention Program (EIP) is required to improve the overall development of children with Trisomy 21 (Down syndrome). To help trainers and parents implement the EIP, a support system has been developed. The support system is able to screen data automatically, store and analyze data, generate an individual EIP (curriculum) with optimal training duration, and generate training automatically. The system consists of hardware and software, where the software has been implemented in Java on Linux Fedora. The software has been tested to ensure its functionality and reliability, and the prototype has also been tested in Down syndrome centers. Test results show that the system can reliably generate an individual curriculum, including the training program, to improve the motor, cognitive, and combined abilities of children with Down syndrome under 6 years of age.
Abstract: In general, fuzzy sets are used to analyze fuzzy system reliability; here, intuitionistic fuzzy set theory is used instead. To analyze the fuzzy system reliability, the reliability of each component of the system is considered as a triangular intuitionistic fuzzy number. Triangular intuitionistic fuzzy numbers and their arithmetic operations are introduced, and expressions for computing the fuzzy reliability of a series system and a parallel system whose component reliabilities follow triangular intuitionistic fuzzy numbers are described. As an application, an imprecise reliability model of an electric network in a dark room is taken; to compute the imprecise reliability of this system, the reliability of each component is represented by a triangular intuitionistic fuzzy number. A numerical example is presented.
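The series and parallel combination rules described above can be sketched as follows. This is an illustrative approximation, not the paper's exact formulation: a triangular intuitionistic fuzzy number is modelled as a pair of triangles (membership, non-membership), triangles on [0, 1] multiply componentwise, and complements combine via the probabilistic sum a + b - ab.

```python
# Hedged sketch: series/parallel reliability with triangular intuitionistic
# fuzzy numbers (TIFNs). Each component is ((a1,a2,a3), (b1,b2,b3)):
# a membership triangle and a non-membership triangle on [0, 1].
# The componentwise arithmetic below is a common approximation.

def series_reliability(comps):
    # Series: R = prod(R_i). Membership triangles multiply componentwise;
    # non-membership triangles combine by the probabilistic sum a + b - ab.
    mu, nu = (1.0, 1.0, 1.0), (0.0, 0.0, 0.0)
    for m, n in comps:
        mu = tuple(x * y for x, y in zip(mu, m))
        nu = tuple(x + y - x * y for x, y in zip(nu, n))
    return mu, nu

def parallel_reliability(comps):
    # Parallel: R = 1 - prod(1 - R_i). The roles of the two triangles swap.
    mu, nu = (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)
    for m, n in comps:
        mu = tuple(x + y - x * y for x, y in zip(mu, m))
        nu = tuple(x * y for x, y in zip(nu, n))
    return mu, nu
```

For two identical components with membership (0.8, 0.9, 0.95), the series membership core is 0.9 * 0.9 = 0.81, while the parallel membership core is 1 - 0.1 * 0.1 = 0.99, mirroring the crisp formulas.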
Abstract: Cloud computing is an approach that provides computation and storage services on demand to clients over the network, independent of device and location. In the last few years, cloud computing has become a trend in information technology, with many companies transferring their business processes and applications to the cloud. Cloud computing with service-oriented architecture has contributed to the rapid development of Geographic Information Systems. The Open Geospatial Consortium, with its standards, provides the interfaces for hosted spatial data and GIS functionality to integrated GIS applications. Furthermore, with their enormous processing power, clouds provide an efficient environment for data-intensive applications, which can be performed efficiently, with higher precision and greater reliability. This paper presents our work on geospatial data services within the cloud computing environment and its technology. A cloud computing environment, with the strengths and weaknesses of the geographic information system, is introduced. The OGC standards that solve our application's interoperability are highlighted. Finally, we outline our system architecture with utilities for requesting and invoking our developed data-intensive applications as a web service.
Abstract: Position-based routing protocols are routing protocols that use node location information, instead of link information, for routing. In position-based routing protocols, it is assumed that the packet's source node knows the positions of itself, its neighbors, and the packet's destination node. Greedy is a very important position-based routing protocol. In one of its variants, named MFR (Most Forward within Radius), the source node, or the packet-forwarding node, sends the packet to the neighbor with the most forward progress towards the destination node (the neighbor closest to the destination). Using distance as the only deciding metric for forwarding a packet to a neighbor is not suitable in all conditions. If the neighbor closest to the destination moves at high speed compared with the source node or intermediate forwarding node, or has very low remaining battery power, then the packet loss probability increases. The proposed strategy combines distance, velocity similarity, and remaining power metrics to decide which neighbor should receive the packet. Simulation results show that the proposed strategy loses fewer packets on average than Greedy, so it is more reliable.
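The combined forwarding decision can be sketched as below. The weighting scheme, normalisation, and field names are assumptions for illustration, not the paper's exact metric: each neighbor is scored on forward progress, velocity similarity to the forwarder, and remaining battery level.

```python
import math

# Hedged sketch of a combined distance / velocity-similarity / power metric
# for next-hop selection, replacing MFR's pure-progress rule. Weights w and
# the scoring form are illustrative assumptions.

def progress(src, nbr, dst):
    # Projection of the neighbor's advance onto the src -> dst direction,
    # i.e. MFR's "forward progress".
    dx, dy = dst[0] - src[0], dst[1] - src[1]
    norm = math.hypot(dx, dy)
    return ((nbr[0] - src[0]) * dx + (nbr[1] - src[1]) * dy) / norm

def velocity_similarity(v_src, v_nbr):
    # 1.0 when speeds match, decaying toward 0 as they diverge.
    return 1.0 / (1.0 + abs(v_src - v_nbr))

def choose_next_hop(src, v_src, dst, neighbours, radius, w=(0.5, 0.3, 0.2)):
    # neighbours: list of (position, speed, battery in [0, 1]).
    best, best_score = None, -math.inf
    for pos, v, battery in neighbours:
        p = progress(src, pos, dst)
        if p <= 0:
            continue  # only forward progress is considered, as in MFR
        score = (w[0] * p / radius
                 + w[1] * velocity_similarity(v_src, v)
                 + w[2] * battery)
        if score > best_score:
            best, best_score = (pos, v, battery), score
    return best
```

With this scoring, a slightly less advanced but slow-moving, well-charged neighbor can beat the geometrically closest-to-destination neighbor, which is exactly the failure case the abstract identifies for plain MFR.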
Abstract: This research is a collaborative narrative study that combines issues from selected papers with the researcher's experience as an anonymous user on social networking sites. The objective of this research is to understand why regular users refuse to contact anonymous users, and to study the communication traditions used in the selected studies. Anonymous users are rejected by regular users because of the fear of cyberbullying, the fear of unpleasant behavior, and an unwillingness to change communication norms. The suggestion for future research design is to use a longitudinal or quantitative design; theory in the rhetorical tradition should help develop a strong trust message.
Abstract: The fault current levels through electric devices have a significant impact on failure probability. Increased fault currents can exceed the rated capacity of circuit breakers and switching equipment and change the operating characteristics of overcurrent relays. To solve these problems, the SFCL (Superconducting Fault Current Limiter) has emerged as a promising alternative. The fault current reduction differs depending on the installation location, so the location of the SFCL is very important. Moreover, by decreasing the fault current, the SFCL prevents surrounding protective devices from being exposed to it, which in turn changes system reliability. In this paper, we propose a method that determines the optimal location for installing an SFCL in a power system. In addition, the reliability of the power system in which the SFCL is installed is evaluated. The efficiency and effectiveness of this method are shown by numerical examples, and the reliability indices at each load point are evaluated. These results show how the reliability of a system changes when an SFCL is installed.
Abstract: In this paper, a reliable cooperative multipath routing
algorithm is proposed for data forwarding in wireless sensor networks
(WSNs). In this algorithm, data packets are forwarded towards the
base station (BS) through a number of paths, using a set of relay
nodes. In addition, the Rayleigh fading model is used to calculate
the evaluation metric of links. Here, the quality of reliability is
guaranteed by selecting optimal relay set with which the probability
of correct packet reception at the BS will exceed a predefined
threshold. Therefore, the proposed scheme ensures reliable packet
transmission to the BS. Furthermore, in the proposed algorithm,
energy efficiency is achieved by energy balancing (i.e. minimizing
the energy consumption of the bottleneck node of the routing path)
at the same time. This work also demonstrates that the proposed algorithm outperforms existing algorithms in extending the longevity of the network with respect to the quality of reliability. The obtained results thus make reliable path selection with minimum energy consumption possible in real time.
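The reliability criterion described above can be sketched as follows. The link model is a standard Rayleigh-fading assumption (received SNR exponentially distributed, so a link succeeds with probability exp(-gamma_th / gamma_bar)), not necessarily the paper's exact evaluation metric; path independence is also assumed.

```python
from math import exp

# Hedged sketch: checking whether a relay set meets a reliability target.
# Under Rayleigh fading, the received SNR is exponentially distributed, so
# P(SNR > gamma_th) = exp(-gamma_th / gamma_bar) for a link with mean SNR
# gamma_bar. Thresholds and independence of paths are assumptions.

def link_success(mean_snr, snr_threshold):
    return exp(-snr_threshold / mean_snr)

def path_success(link_snrs, snr_threshold):
    # A path delivers the packet only if every hop succeeds.
    p = 1.0
    for g in link_snrs:
        p *= link_success(g, snr_threshold)
    return p

def meets_reliability(paths, snr_threshold, target):
    # The BS receives the packet if at least one of the (assumed
    # independent) paths delivers it.
    p_all_fail = 1.0
    for path in paths:
        p_all_fail *= 1.0 - path_success(path, snr_threshold)
    return 1.0 - p_all_fail >= target
```

A single path with one mean-SNR-10 link succeeds with probability exp(-0.1) ≈ 0.905, below a 0.95 target; adding a second independent identical path lifts the delivery probability to about 0.991, which is the multipath effect the algorithm exploits.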
Abstract: Consumer electronics are pervasive. It is impossible to
imagine a household or office without DVD players, digital cameras,
printers, mobile phones, shavers, electrical toothbrushes, etc. All
these devices operate at different voltage levels ranging from 1.8 to
20 VDC, in the absence of universal standards. The available supply voltages, however, are usually 120/230 VAC at 50/60 Hz. This
situation makes an individual electrical energy conversion system
necessary for each device. Such converters usually involve several
conversion stages and often operate with excessive losses and poor
reliability. The aim of the project presented in this paper is to design
and implement a multi-channel DC/DC converter system,
customizing the output voltage and current ratings according to the
requirements of the load. Distributed, multi-agent techniques will be
applied for the control of the DC/DC converters.
Abstract: Software reliability prediction offers a great opportunity to measure the software failure rate at any point throughout system test. A software reliability prediction model provides a technique for improving reliability. Software reliability is a very important factor in estimating overall system reliability, which depends on the individual component reliabilities; it differs from hardware reliability in that it reflects design perfection. The main reason for software reliability problems is the high complexity of software, and various approaches can be used to improve it. In this article we focus on a software reliability model, assuming that there is a time redundancy whose value (the number of repeated transmissions of basic blocks) can be an optimization parameter. We consider the given mathematical model under the assumption that the system may experience not only irreversible failures but also failures that can be treated as self-repairing, which significantly affect the reliability and accuracy of information transfer. The main task of this paper is to find the time distribution function (DF) of the transmission of an instruction sequence consisting of a random number of basic blocks. We consider the system software unreliable, with the time between adjacent failures exponentially distributed.
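The basic trade-off behind the time-redundancy parameter can be illustrated with a small worked example. This is a hedged sketch, not the paper's derivation: with exponentially distributed failure times (rate lam), a basic block of duration t is transmitted without failure with probability exp(-lam * t), so allowing up to k repeated transmissions gives P(success) = 1 - (1 - exp(-lam * t))**k; lam, t and k below are illustrative values.

```python
from math import exp

# Hedged sketch: success probability of one basic block under time
# redundancy. Failure times are exponential with rate lam, the block takes
# time t, and up to k independent transmission attempts are allowed.

def block_success(lam, t, k):
    p_once = exp(-lam * t)              # one attempt survives no failure
    return 1 - (1 - p_once) ** k        # at least one of k attempts works
```

Raising k increases the success probability monotonically but also lengthens the total transmission time, which is why the number of repetitions is treated as an optimization parameter.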
Abstract: Considering payload, reliability, security and operational lifetime as major constraints in image transmission, we put forward in this paper a steganographic technique implemented at the physical layer. We suggest the transmission of halftoned images (payload constraint) in wireless sensor networks to reduce the amount of transmitted data. For low-power and interference-limited applications, Turbo codes provide suitable reliability. Ensuring security is one of the highest priorities in many sensor networks, and the Turbo code structure, apart from providing forward error correction, can be utilized to provide encryption. We first consider the halftoned image, and then present the method of embedding a block of data (called the secret) in this halftoned image during the turbo encoding process. The small modifications required at the turbo decoder end to extract the embedded data are presented next. The implementation complexity and the degradation of the BER (bit error rate) in the Turbo-based stego system are analyzed. Using some entropy-based cryptanalytic techniques, we show that the strength of our Turbo-based stego system approaches that of OTPs (one-time pads).
Abstract: The quality of a machined surface is becoming more and more important in justifying the increasing demands for sophisticated component performance, longevity, and reliability. Usually, any machining operation leaves its own characteristic evidence on the machined surface in the form of finely spaced micro-irregularities (surface roughness) left by the indeterministic characteristics of the different elements of the system: tool, machine, workpart, and cutting parameters. However, one of the most influential sources of surface roughness in machining is the instantaneous state of the tool edge. The main objective of the current work is to relate the in-process immeasurable cutting edge deformation and surface roughness to more reliable, easy-to-measure force signals, using robust non-linear time-dependent regression modeling techniques. Time-dependent modeling is beneficial when modern machining systems, such as adaptive control techniques, are considered, where the state of the machined surface and the health of the cutting edge are monitored, assessed and controlled online using real-time information provided by the variability in the measured force signals. Correlation between wear propagation and roughness variation is developed throughout the different edge lifetimes. The surface roughness is further evaluated in light of the variation in both the static and the dynamic force signals. Consistent correlation is found between surface roughness variation and tool wear progress within its initial and constant regions. In the first few seconds of cutting, the expected and well-known effect of the cutting parameters is observed: surface roughness is positively influenced by the feed rate and negatively by the cutting speed. As cutting continues, roughness is affected, to different extents, by the rather localized wear modes on either the tool nose or its flank areas. Moreover, roughness seems to vary as the wear mode transfers from one to another and, in general, it improves as wear increases, though with possible corresponding workpart dimensional inaccuracy. The dynamic force signals are found to be reasonably sensitive in simulating either the progressive or the random modes of tool edge deformation. While the frictional force components, feeding and radial, are informative regarding progressive wear modes, the vertical (power) component is a more representative carrier of the system instability resulting from the edge's random deformation.
Abstract: Although many methods for ranking fuzzy numbers have so far been discussed broadly, most of them suffer from shortcomings such as the requirement of complicated calculations, inconsistency with human intuition, and indiscrimination. The motivation of this study is to develop a model for ranking fuzzy numbers based on lexicographical ordering, which provides decision-makers with a simple and efficient algorithm to generate an ordering founded on a precedence. The main emphasis here is on ease of use and reliability. The effectiveness of the proposed method is finally demonstrated by a comprehensive comparison of different ranking methods with the present one.
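The idea of lexicographical ranking can be sketched as below. The precedence used here, first the core b, then the left endpoint a, then the right endpoint c of a triangular fuzzy number (a, b, c), is an assumption for illustration only; the paper defines its own precedence.

```python
# Hedged sketch: ranking triangular fuzzy numbers (a, b, c) by
# lexicographical ordering. The precedence (core, left endpoint, right
# endpoint) is an illustrative assumption, not the paper's definition.

def rank_triangular(fns):
    # Largest first: compare cores; break ties by left, then right endpoint.
    return sorted(fns, key=lambda t: (t[1], t[0], t[2]), reverse=True)
```

The appeal of a lexicographic scheme, as the abstract notes, is that ranking reduces to ordinary tuple comparison: no integrals, centroids, or area computations are required.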
Abstract: This paper proposes, implements and evaluates an original discretization method for continuous random variables, in order to estimate the reliability of systems for which stress and strength are defined as complex functions and whose reliability is not derivable through analytic techniques. The method is compared with two other discretization approaches that have appeared in the literature, through a comparative study involving four engineering applications. The results show that the proposal is very efficient in terms of the closeness of the estimates to the true (simulated) reliability. In the study we analyzed both a normal and a non-normal distribution for the random variables: the method is theoretically suitable for any parametric family.
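A generic discretization-based stress-strength estimate (not the paper's specific method) can be sketched as follows: stress X and strength Y are each replaced by n equal-probability mass points placed at mid-quantiles, and R = P(Y > X) is estimated from the point masses.

```python
from statistics import NormalDist

# Hedged sketch of stress-strength reliability via discretization. Each
# continuous variable is replaced by n mass points of weight 1/n at the
# mid-quantiles; this particular placement rule is an illustrative choice.

def discretize(dist, n):
    return [dist.inv_cdf((i + 0.5) / n) for i in range(n)]

def reliability(stress, strength, n=200):
    xs, ys = discretize(stress, n), discretize(strength, n)
    hits = sum(1 for x in xs for y in ys if y > x)
    return hits / (n * n)
```

For normal stress N(100, 10) and strength N(130, 10), the exact reliability is Phi(30 / sqrt(200)) ≈ 0.983, and the discretized estimate lands close to it, illustrating the "closeness to the true reliability" criterion used in the comparison.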
Abstract: The evaluation of the residual reliability of large parallel computer interconnection systems is not practicable with existing methods. Under such conditions, one must resort to approximation techniques that provide upper and lower bounds on this reliability. In this context, a new approximation method for providing bounds on residual reliability is proposed here. The proposed method is supported by two algorithms for simulation purposes. The bounds on the residual reliability of three different categories of interconnection topologies are efficiently found using the proposed method.
Abstract: The purpose of this study was to develop a “teachers’ self-efficacy scale for high school physical education teachers (TSES-HSPET)” in Taiwan. The scale is based on the self-efficacy theory of Bandura [1], [2]. This study used exploratory and confirmatory factor analyses to test reliability and validity. The participants were high school physical education teachers in Taiwan, sampled by both stratified random sampling and cluster sampling. In the first stage, 350 teachers were sampled and 234 valid scales (133 male, 101 female) were returned. In the second stage, 350 teachers were sampled and 257 valid scales (143 male, 110 female, 4 with gender unspecified) were returned. Exploratory factor analysis in the first stage explained 60.77% of the total variance, supporting construct validity. The Cronbach’s alpha coefficient of internal consistency was 0.91 for the total scale, and 0.84 and 0.90 for the subscales. In the second stage, confirmatory factor analysis was used to test construct validity. The result showed that the fit index could be accepted (χ2 (75) = 167.94, p
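The internal-consistency statistic reported above is standard and can be computed directly: for a k-item scale, Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The sketch below uses made-up toy data, not the study's responses.

```python
# Sketch: Cronbach's alpha for a k-item scale.
# scores: list of respondents, each a list of k item scores.

def cronbach_alpha(scores):
    k = len(scores[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

When every respondent answers all items identically (perfectly correlated items), the formula yields alpha = 1.0, the theoretical maximum; values such as the 0.91 reported above indicate high but imperfect item intercorrelation.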
Abstract: This paper presents a computational methodology based on matrix operations for a computer-based solution to the problem of performance analysis of software reliability models (SRMs). A set of seven comparison criteria has been formulated to rank various non-homogeneous Poisson process software reliability models proposed during the past 30 years for estimating software reliability measures such as the number of remaining faults, the software failure rate, and software reliability. The selection of the optimal SRM for a particular case has long been an area of interest for researchers in the field of software reliability, but the tools and techniques for software reliability model selection found in the literature cannot be used with a high level of confidence, as they use a limited number of model selection criteria. A real data set from a medium-sized software project, taken from published papers, is used to demonstrate the matrix method. The result of this study is a ranking of SRMs based on the permanent of the criteria matrix formed for each model from the comparison criteria: the software reliability model with the highest permanent is ranked first, and so on.
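The ranking step can be sketched as below. The permanent is like the determinant but with all terms added (no sign alternation); the brute-force expansion shown is fine for the small criteria matrices involved, and the matrices in the usage example are made-up toy data, not the paper's values.

```python
from itertools import permutations
from math import prod

# Sketch: rank models by the permanent of each model's criteria matrix.
# The permanent of an n x n matrix m is sum over all permutations p of
# prod_i m[i][p[i]] -- the determinant formula without the sign factor.

def permanent(m):
    n = len(m)
    return sum(prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def rank_models(criteria_matrices):
    # criteria_matrices: {model_name: square matrix of criteria scores};
    # the model with the largest permanent is ranked first.
    scores = {name: permanent(m) for name, m in criteria_matrices.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

For example, permanent([[1, 2], [3, 4]]) = 1*4 + 2*3 = 10, whereas the determinant of the same matrix is -2; the all-positive combination is what makes the permanent usable as an aggregate criteria score.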
Abstract: The utilization of renewable energy sources in electric power systems is increasing quickly because of public concerns about adverse environmental impacts and the rising cost of conventional energy sources. Although these energy sources can considerably diminish system fuel costs, they can also have a significant influence on system reliability. An appropriate trade-off between the system reliability level and the capital investment cost of the system is therefore vital. This paper presents a hybrid wind/photovoltaic plant designed to supply the IEEE reliability test system load pattern; the plant's capital investment cost is minimized by a hybrid particle swarm optimization (PSO) / harmony search (HS) approach while the system fulfills the appropriate level of reliability.
Abstract: Today's manufacturing companies face multiple and dynamic customer-supplier relationships embedded in non-hierarchical production networks. This complex environment leads to problems with delivery reliability and wasteful turbulence throughout the entire network. This paper describes an operational model, based on a theoretical framework, that improves the delivery reliability of each individual customer-supplier relationship within non-hierarchical production networks of the European machinery and equipment industry. By developing a decentralized coordination mechanism, based on determining the value of delivery reliability and deriving an incentive system for suppliers, the number of on-time deliveries can be increased and the turbulence in the production network thus smoothed. Comparable to an electronic stock exchange, the coordination mechanism transforms the manual and non-transparent process of determining penalties for delivery delays into an automated and transparent market mechanism that creates delivery reliability.