Abstract: This research is a collaborative narrative inquiry that combines issues drawn from selected papers with the researcher's experience as an anonymous user on social networking sites. The objective is to understand why regular users refuse contact with anonymous users, and to study the communication traditions used in the selected studies. Regular users reject anonymous users because of the fear of cyberbullying, the fear of unpleasant behavior, and an unwillingness to change communication norms. For future research we suggest a longitudinal or quantitative design, and theory in the rhetorical tradition should help develop a strong trust message.
Abstract: We have developed a computer program consisting of 6 subtests assessing children's hand dexterity, applicable in rehabilitation medicine. We carried out a normative study on a representative sample of 285 children aged 7 to 15 (mean age 11.3) and proposed clinical standards for three age groups (7-9, 9-11, 12-15 years). We have shown the statistical significance of differences among the corresponding mean values of task completion time. We also found a strong correlation between task completion time and the age of the subjects, and performed test-retest reliability checks in a sample of 84 children, obtaining high Pearson coefficients for the dominant and non-dominant hand, in the ranges 0.74-0.97 and 0.62-0.93, respectively.
A new MATLAB-based programming tool for the analysis of cardiologic RR intervals and blood pressure descriptors has also been developed. For each data set, ten parameters are extracted: 2 in the time domain, 4 in the frequency domain, and 4 from Poincaré plot analysis. In addition, twelve parameters of baroreflex sensitivity are calculated. All these data sets can be visualized in the time domain together with their power spectra and Poincaré plots. If available, the respiratory oscillation curves can also be plotted for comparison. Another application processes biological data obtained from BLAST analysis.
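Two of the standard Poincaré-plot descriptors, SD1 and SD2, can be computed directly from an RR-interval series. The sketch below uses invented data; identifying SD1/SD2 with two of the tool's four Poincaré parameters is an assumption for illustration:

```python
import math

# Hypothetical RR-interval series in milliseconds (assumed data).
rr = [812, 830, 825, 840, 818, 822, 835, 828, 819, 833]

# Poincaré plot pairs: (RR_n, RR_{n+1}).
x = rr[:-1]
y = rr[1:]

# Rotate the cloud by 45 degrees: dispersion perpendicular to the
# identity line gives SD1, dispersion along it gives SD2.
perp  = [(b - a) / math.sqrt(2) for a, b in zip(x, y)]
along = [(b + a) / math.sqrt(2) for a, b in zip(x, y)]

def sd(values):
    """Sample standard deviation."""
    m = sum(values) / len(values)
    return math.sqrt(sum((v - m) ** 2 for v in values) / (len(values) - 1))

sd1 = sd(perp)   # short-term RR variability
sd2 = sd(along)  # long-term RR variability
```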
Abstract: This paper provides a framework that simultaneously incorporates reliability, as a sign of disruption in distribution systems, and partial covering theory, as a response to limitations in coverage radii and economic preferences, into the traditional literature on capacitated facility location problems. As a result, we develop a bi-objective model based on discrete scenarios for expected cost minimization and demand coverage maximization through a three-echelon supply chain network, facilitating multiple capacity levels for the provider-side layers and imposing a gradual coverage function for the distribution centers (DCs). Additionally, besides aggregating the objectives to solve the model with the LINGO software, a branch of the LP-metric method called the Min-Max approach is proposed, and different aspects of the corresponding model are explored.
Abstract: Electronics products that achieve high levels of integrated communications, computing, entertainment, and multimedia features in small, stylish, and robust new form factors are winning in the marketplace. Because of the high costs an industry may incur, and because high yield is directly proportional to high profits, IC (Integrated Circuit) manufacturers struggle to maximize yield; yet today's customers demand miniaturization, low cost, high performance, and excellent reliability, making yield maximization a never-ending search for an enhanced assembly process. With factors such as minimum tolerances and tighter parameter variations, a systematic approach is needed to predict the assembly process. To evaluate the quality of upcoming circuits, yield models are used, which not only predict manufacturing costs but also provide vital information that eases the process of correction when yields fall below expectations. For an IC manufacturer to obtain higher assembly yields, all factors must be taken into consideration: boards, placement, components, the materials from which the components are made, and processes. Effective placement yield depends heavily on machine accuracy and on the vision system, which needs the ability to recognize the features on the board and component in order to place the device accurately on the pads and bumps of the PCB. There are currently two methods for accurate positioning: using the edge of the package, and using solder ball locations, also called footprints. The only assumption that a yield model makes is that all boards and devices are completely functional. This paper focuses on the Monte Carlo method, a class of computational algorithms that depends on repeated random sampling to compute results. This method is used to simulate the placement and assembly processes within a production line.
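The Monte Carlo idea can be sketched in a few lines: model placement error as a random offset and estimate yield as the fraction of simulated placements that land within tolerance. The normal error model and all numeric values below are illustrative assumptions, not the paper's production-line parameters:

```python
import random

def placement_yield(n_placements, sigma_um, tol_um, seed=0):
    """Monte Carlo sketch of component-placement yield: a placement
    passes when the random x/y offset (machine accuracy plus vision
    error) stays within the pad tolerance in both axes."""
    rng = random.Random(seed)
    passed = 0
    for _ in range(n_placements):
        dx = rng.gauss(0.0, sigma_um)  # x offset in micrometers
        dy = rng.gauss(0.0, sigma_um)  # y offset in micrometers
        if abs(dx) <= tol_um and abs(dy) <= tol_um:
            passed += 1
    return passed / n_placements

# Tolerance at twice the placement sigma on each axis (assumed values).
y = placement_yield(100_000, sigma_um=25.0, tol_um=50.0)
print(f"estimated placement yield: {y:.3f}")
```

Repeating the run with different seeds shows the sampling variability that makes Monte Carlo estimates converge only with many repetitions.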
Abstract: Bootstrapping has gained popularity in various tests of hypotheses as an alternative to using the asymptotic distribution when one is not sure of the distribution of the test statistic under the null hypothesis. This method, in general, has two variants: the parametric and the nonparametric approaches. However, issues with the reliability of this method arise in many applications. This paper addresses the reliability issue by establishing a reliability measure in terms of quantiles with respect to the asymptotic distribution, when the latter is approximately correct. The test of hypotheses used is the F-test. The simulated results show that using nonparametric bootstrapping in the F-test gives better reliability than parametric bootstrapping with relatively higher degrees of freedom.
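A nonparametric bootstrap of an F statistic can be sketched as follows. The variance-ratio form of the F-test, the pooled-resampling null scheme, and the data are all assumptions for illustration, not the paper's simulation design:

```python
import random
import statistics

def f_stat(a, b):
    """Variance-ratio F statistic for two samples."""
    return statistics.variance(a) / statistics.variance(b)

def bootstrap_f_pvalue(a, b, n_boot=2000, seed=1):
    """Nonparametric bootstrap sketch: under H0 both samples come from
    one distribution, so resamples are drawn with replacement from the
    pooled data and the observed F is compared to the resampled Fs."""
    rng = random.Random(seed)
    observed = f_stat(a, b)
    pooled = a + b
    extreme = 0
    for _ in range(n_boot):
        ra = [rng.choice(pooled) for _ in range(len(a))]
        rb = [rng.choice(pooled) for _ in range(len(b))]
        if f_stat(ra, rb) >= observed:
            extreme += 1
    return extreme / n_boot

# Hypothetical samples with similar spread.
a = [5.1, 4.8, 5.6, 5.0, 4.7, 5.3, 5.2, 4.9]
b = [5.0, 5.4, 4.6, 5.1, 4.9, 5.5, 4.8, 5.2]
p = bootstrap_f_pvalue(a, b)
print(f"bootstrap p-value: {p:.3f}")
```

The parametric variant would instead resample from fitted distributions (e.g. normals with the estimated moments) rather than from the pooled data.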
Abstract: Fault current levels through electric devices have a significant impact on failure probability. Increased fault currents exceed the rated capacity of circuit breakers and switching equipment, and change the operating characteristics of overcurrent relays. To solve these problems, the SFCL (Superconducting Fault Current Limiter) has emerged as a promising alternative. The fault current reduction differs depending on the installation location, so the location of the SFCL is very important. Moreover, because the SFCL decreases the fault current, it prevents the surrounding protective devices from being exposed to the fault current, which in turn changes the system reliability. In this paper, we propose a method that determines the optimal location for installing an SFCL in a power system. In addition, the reliability of the power system with the SFCL installed is evaluated. The efficiency and effectiveness of this method are shown by numerical examples, and the reliability indices are evaluated at each load point. These results show how the reliability of a system changes when an SFCL is installed.
Abstract: Because customers in the new century tend to express globally increasing demands, networks of interconnected businesses have been established, and the management of such networks appears to be a major key to gaining competitive advantage. Supply chain management encompasses such managerial activities. Within a supply chain, a critical role is played by quality. QFD is a widely utilized tool that serves not only to bring quality to the ultimate provision of the products or service packages required by the end customer or the retailer, but can also initiate a satisfactory relationship with the initial customer, that is, the wholesaler. However, the wholesalers' cooperation is considerably based on capabilities that are heavily dependent on their locations and existing circumstances. Therefore, it is undeniable that, for every company, each wholesaler possesses a specific importance ratio that can heavily influence the figures calculated in the House of Quality in QFD. Moreover, owing to the competitiveness of today's marketplace, it is widely recognized that consumers' expressed demands are highly volatile across production periods. Such instability and proneness to change is tangibly noticeable, and taking it into account during the analysis of the HOQ is influential and doubtless required. For a more reliable outcome in such matters, this article demonstrates the viability of applying the Analytic Network Process to consider the wholesalers' reputation, and simultaneously introduces a mortality coefficient for the reliability and stability of the consumers' expressed demands over time. The paper then elaborates on the relevant contributory factors and approaches for calculating such coefficients. In the end, the article concludes that an empirical application is needed to achieve broader validity.
Abstract: Existing process models for the development of mechatronic systems provide a largely parallel procedure in the detailed development phase, carried out largely independently in the various disciplines involved. A new process model further develops the existing models for use in the development of adaptronic systems. This approach is based on an intermediate integration and an abstract model of the adaptronic system. Based on this system model, a simulation of the global system behavior under external and internal factors or forces is developed. A special data management system is used for the intermediate integration. According to the presented approach, this data management system has a number of functions that are not part of the "normal" PDM functionality. Therefore, a concept for a new data management system for the development of adaptronic systems is presented in this paper. This concept divides the functions into six layers. In the first layer, a system model is created, which divides the adaptronic system according to its components and the various technical disciplines involved. Moreover, the parameters and properties of the system are modeled and linked together with the requirements and the system model. The modeled parameters and properties form a network, which is analyzed in the second layer. From this analysis, the adjustments to individual components necessary for a specific manipulation of the system behavior can be determined. The third layer contains an automatic abstract simulation of the system behavior. This simulation is a precursor to the network analysis and serves as a filter. Through the network analysis and simulation, changes to system components are examined and the necessary adjustments to other components are calculated.
The remaining layers of the concept treat the automatic calculation of system reliability, the "normal" PDM functionality, and the integration of discipline-specific data into the system model. A prototypical implementation of the data management concept, with the addition of automatic system development, is being realized using the data management system ENOVIA SmarTeam V5 and the simulation system MATLAB.
Abstract: This paper focuses on the probabilistic numerical solution of problems in biomechanics and mining. Applications of the Simulation-Based Reliability Assessment (SBRA) method are presented in the design of external fixators applied in traumatology and orthopaedics (these fixators can be applied, for example, in the treatment of open and unstable fractures) and in the solution of a hard rock (ore) disintegration process (i.e. the bit moves into the ore and subsequently disintegrates it). The results are compared with experiments, and a new design of the excavation tool is proposed.
Abstract: Virtual Reality (VR) is becoming increasingly important for business, education, and entertainment; VR technology has therefore been applied for training purposes in areas such as the military, safety training, and flight simulators. In particular, a superior, highly reliable VR training system depends greatly on immersion. Manipulation training in immersive virtual environments is difficult, partly because users must do without the haptic contact with real objects that they rely on in the real world to orient themselves and the objects they manipulate.
In this paper, we create a questionnaire on immersion and an experiment to assess the influence of immersion on performance in a VR training system. The Immersion Questionnaire (IQ) covers spatial immersion, psychological immersion, and sensory immersion. We examine how users of the training system perform in visual attention and signal detection. Twenty subjects were allocated to a factorial design consisting of two different VR systems (desktop VR and projector VR). The results indicate that the different VR representation methods significantly affected the participants' immersion dimensions.
Abstract: The characteristics of ad hoc networks, and even their existence, depend on the nodes forming them. Thus, services and applications designed for ad hoc networks should adapt to this dynamic and distributed environment. In particular, multicast algorithms with reliability and scalability requirements should abstain from centralized approaches. We aspire to define a reliable and scalable multicast protocol for ad hoc networks, utilizing epidemic techniques for this purpose. In this paper, we present a brief survey of epidemic algorithms for reliable multicasting in ad hoc networks and describe formulations and analytical results for simple epidemics. Then, the P2P anti-entropy algorithm for content distribution and our prototype simulation model are described, together with initial results demonstrating the behavior of the algorithm.
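The behavior of a simple epidemic can be illustrated with a toy push-gossip simulation. This is deliberately simpler than the pairwise-exchange P2P anti-entropy algorithm the paper studies; the fanout and node count are illustrative assumptions:

```python
import random

def epidemic_rounds(n_nodes, fanout=1, seed=42):
    """Toy simulation of epidemic dissemination: in each round every
    node holding the update pushes it to `fanout` uniformly random
    peers. Returns the number of rounds until all nodes have it."""
    rng = random.Random(seed)
    infected = {0}                     # node 0 originates the multicast
    rounds = 0
    while len(infected) < n_nodes:
        new = set()
        for _node in infected:
            for _ in range(fanout):
                new.add(rng.randrange(n_nodes))
        infected |= new
        rounds += 1
    return rounds

r = epidemic_rounds(100)
print(f"rounds to reach all 100 nodes: {r}")
```

The characteristic result of simple-epidemic analysis is that full dissemination takes O(log n) rounds with high probability, which is what makes the approach scalable.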
Abstract: Sediment formation is a complex hydraulic phenomenon that has emerged as a major operational and maintenance consideration in modern hydraulic engineering in general and river engineering in particular. Sediment accumulation along the river course, and its eventual storage in the form of islands, affects water intake into the canal systems fed by storage reservoirs. Without proper management, sediment transport can lead to major operational challenges in the water distribution systems of arid regions such as the Dez and Hamidieh command areas. This paper investigates sedimentation in the Western Canal of the Dez Diversion Weir using the SHARC model and compares the results with the two intake structures of the Hamidieh dam in Iran using the SSIIM model. The objective was to identify the factors that influence the process, check the reliability of the outcome, and provide ways to mitigate the implications for the operation and maintenance of the structures. The results estimated sand and silt bed load concentrations at 193 ppm and 827 ppm, respectively. This followed a more or less similar pattern in Hamidieh, where sediment formation impeded water intake into the canal system. Given the available data on average annual bed loads and average suspended sediment loads of 165 ppm and 837 ppm in the Dez, there was a significant statistical difference (16%) between the sand grains, whereas no significant difference (1.2%) was found in the silt grain sizes. One explanation for this finding is that along the 6 km river course there were considerable meandering effects, which explains a recent shift in the hydraulic behavior along the stream course under investigation. The sand concentration downstream, relative to the present state of the canal, showed a steeply descending curve, while sediment trapping showed a steeply ascending curve. These occurred because the diversion weir was not considered in the simulation model.
The comparative study showed very close similarities in the results, which indicates that both software packages can be used as accurate and reliable analytical tools for the simulation of sedimentation in hydraulic engineering.
Abstract: In this paper, approaches for increasing the effectiveness of error detection in computer network channels with Pulse-Amplitude Modulation (PAM) are proposed. The proposed approaches are based on consideration of the special features of the errors that appear in lines with PAM. The first approach consists of a CRC modification specifically for lines with PAM. The second approach is based on the use of weighted checksums; a method for coding the checksum components has been developed. It is shown that the proposed checksum modification ensures superior data-control reliability for channels with PAM compared to CRC.
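The general idea behind weighted checksums can be illustrated with a minimal position-weighted sum. This is a generic sketch of the technique, not the specific component-coding scheme developed in the paper:

```python
def weighted_checksum(data, mod=255):
    """Position-weighted checksum sketch: each symbol contributes
    (position + 1) * value, so reordered symbols change the sum."""
    return sum((i + 1) * b for i, b in enumerate(data)) % mod

def plain_checksum(data, mod=255):
    """Unweighted sum for comparison."""
    return sum(data) % mod

original = bytes([10, 20, 30, 40])
swapped  = bytes([10, 30, 20, 40])   # two symbols transposed in transit

# The plain sum cannot detect the transposition; the weighted sum can,
# because the weights break the symmetry between positions.
assert plain_checksum(original) == plain_checksum(swapped)
assert weighted_checksum(original) != weighted_checksum(swapped)
```

Matching the weights to the error patterns characteristic of multi-level PAM symbols is the refinement the paper pursues.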
Abstract: CONWIP (constant work-in-process), a pull production system, has been widely studied by researchers to date. The CONWIP pull production system is an alternative to pure push and pure pull production systems. It lowers and controls inventory levels, which improves throughput, reduces production lead time, and improves delivery reliability and work utilization. In this article, a CONWIP pull production system was simulated, alongside a push planning system. To compare these systems, the parameters of each production planning system were adjusted via a production planning system (PPS) game. The main target was to reduce the total WIP to a minimum while maintaining throughput and delivery reliability. Data were recorded and evaluated. A future state was designed for the real production of plastic components, together with the setup of the two indicators for the CONWIP pull production system, which can greatly help the company be more competitive in the market.
Abstract: Recently, distributed generation technologies have received much attention for the potential energy savings and reliability assurances that might be achieved as a result of their widespread adoption. Fueling the attention have been the possibilities of international agreements to reduce greenhouse gas emissions, electricity sector restructuring, high power reliability requirements for certain activities, and concern about easing transmission and distribution capacity bottlenecks and congestion. It is therefore necessary to investigate the impact of these kinds of generators on distribution feeder reconfiguration. This paper presents an approach to distribution reconfiguration considering Distributed Generators (DGs). The objective function is the summation of electrical power losses. A Tabu search optimization is used to solve the optimal operation problem. The approach is tested on a real distribution feeder.
Abstract: In this paper, a reliable cooperative multipath routing algorithm is proposed for data forwarding in wireless sensor networks (WSNs). In this algorithm, data packets are forwarded towards the base station (BS) through a number of paths, using a set of relay nodes. In addition, the Rayleigh fading model is used to calculate the evaluation metric of the links. The quality of reliability is guaranteed by selecting an optimal relay set with which the probability of correct packet reception at the BS exceeds a predefined threshold; the proposed scheme thus ensures reliable packet transmission to the BS. Furthermore, the proposed algorithm achieves energy efficiency through energy balancing, i.e. minimizing the energy consumption of the bottleneck node of the routing path. This work also demonstrates that the proposed algorithm outperforms existing algorithms in extending the longevity of the network with respect to the quality of reliability. The obtained results thus make reliable path selection with minimum energy consumption possible in real time.
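The reliability constraint described above can be sketched as follows. The Rayleigh-fading success probability per link, the independence of the paths, and the greedy selection rule are all assumptions for illustration, not the paper's exact metric or optimization:

```python
import math

def rayleigh_link_success(mean_snr, gamma_th=1.0):
    """Under Rayleigh fading the received SNR is exponentially
    distributed, so the probability that it exceeds the decoding
    threshold gamma_th is exp(-gamma_th / mean_snr)."""
    return math.exp(-gamma_th / mean_snr)

def delivery_probability(path_probs):
    """Probability that at least one of several independent paths
    delivers the packet: 1 - prod(1 - p_i)."""
    q = 1.0
    for p in path_probs:
        q *= 1.0 - p
    return 1.0 - q

def minimal_relay_set(mean_snrs, threshold):
    """Greedy sketch: add paths (best mean SNR first) until the
    delivery probability at the BS exceeds the predefined threshold."""
    probs = sorted((rayleigh_link_success(s) for s in mean_snrs),
                   reverse=True)
    chosen = []
    for p in probs:
        chosen.append(p)
        if delivery_probability(chosen) >= threshold:
            break
    return chosen

# Three candidate relay paths with assumed mean SNRs (linear scale).
chosen = minimal_relay_set([3.0, 2.0, 1.5], threshold=0.9)
print(f"paths used: {len(chosen)}, "
      f"delivery prob: {delivery_probability(chosen):.3f}")
```

Energy balancing would then break ties among feasible relay sets in favor of the one whose bottleneck node consumes the least energy.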
Abstract: In today's new technology era, clusters have become a necessity for modern computing and data applications, since many applications take a long time (even days or months) for computation. Although parallelization speeds up computation, the time required for many applications can still be considerable. Thus, the reliability of the cluster becomes a very important issue, and the implementation of a fault tolerance mechanism becomes essential. The difficulty of designing a fault tolerant cluster system increases with the variety of possible failures. The most important requirement is that an algorithm that handles a simple failure in a system must also tolerate more severe failures. In this paper, we implemented the watchdog timer concept in a parallel environment to take care of failures. The implementation of this simple algorithm in our project handles different types of failures; consequently, we found that the reliability of the cluster improves.
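The watchdog-timer concept reduces to a deadline on periodic "I am alive" signals. A minimal single-process sketch, assuming a kick/check interface rather than the paper's cluster implementation:

```python
import time

class Watchdog:
    """Minimal watchdog-timer sketch: a monitored task must call kick()
    periodically; if no kick arrives within `timeout` seconds, check()
    reports a failure so the task or node can be restarted."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_kick = time.monotonic()
        self.failed = False

    def kick(self):
        """Called by the healthy task to reset the timer."""
        self.last_kick = time.monotonic()

    def check(self):
        """Called by the monitor; latches a failure once detected."""
        if time.monotonic() - self.last_kick > self.timeout:
            self.failed = True
        return self.failed

wd = Watchdog(timeout=0.2)
wd.kick()                  # healthy task kicks in time
assert wd.check() is False
time.sleep(0.3)            # simulated hang: no kick arrives
assert wd.check() is True  # watchdog expires and flags the failure
```

In a cluster setting, the monitor would run on a separate node and trigger task migration or restart instead of merely setting a flag.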
Abstract: Consumer electronics are pervasive. It is impossible to
imagine a household or office without DVD players, digital cameras,
printers, mobile phones, shavers, electrical toothbrushes, etc. All
these devices operate at different voltage levels ranging from 1.8 to
20 VDC, in the absence of universal standards. However, the available supply voltages are usually 120/230 VAC at 50/60 Hz. This
situation makes an individual electrical energy conversion system
necessary for each device. Such converters usually involve several
conversion stages and often operate with excessive losses and poor
reliability. The aim of the project presented in this paper is to design
and implement a multi-channel DC/DC converter system,
customizing the output voltage and current ratings according to the
requirements of the load. Distributed, multi-agent techniques will be
applied for the control of the DC/DC converters.
Abstract: Continuity of supply to electric installations is becoming one of the main requirements of the electric supply network (generation, transmission, and distribution of electric energy). Meeting this requirement depends, on one side, on the structure of the electric network and, on the other side, on the availability of the reserve source provided to maintain the supply in case of failure of the principal one. The availability of supply depends not only on the reliability parameters of the two sources (principal and reserve), but also on the reliability of the circuit breaker that interlocks the reserve source in case of failure of the principal one. In addition, while the principal source is in operation its monitoring can be ideal and sure; for the reserve source, which is on standby, preventive maintenance carried out at time intervals (with a given periodicity) and for well-defined lengths of time is envisaged, so that this source will always be available in case of failure of the principal source. The choice of the periodicity of preventive maintenance of the reserve source directly influences the reliability of the electric feeder system. In this work, on the basis of semi-Markovian processes, the influence of the interlocking time of the reserve source on the reliability of an industrial electric network is studied, and the optimal interlocking time in case of failure of the principal source is given; the influence of the periodicity of the preventive maintenance of the reserve source is also studied, and the optimal periodicity is given.
Abstract: This paper presents a new method for computing the reliability of a system arranged in a series or parallel configuration. In this method, we estimate the life distribution function of the whole structure using the asymptotic Extreme Value (EV) distribution of Type I (Gumbel theory). We use the EV distribution in its minimal mode to estimate the life distribution function of a series structure, and in its maximal mode for a parallel system. All parameters are estimated by the method of moments. The reliability function, the failure (hazard) rate, and the p-th percentile point of each function are determined. Other important indices, such as the Mean Time To Failure (MTTF) and the Mean Time To Repair (MTTR), for non-repairable and renewal systems in both series and parallel structures, are also computed.
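The minimal-mode idea rests on the fact that a series system fails at the minimum of its component lives, so its life tends toward the Type I EV distribution for minima. A sketch with illustrative location/scale parameters (not estimates from real data):

```python
import math

def gumbel_min_reliability(t, mu, sigma):
    """Survival function of the Type I extreme-value (Gumbel)
    distribution for minima: R(t) = exp(-exp((t - mu) / sigma))."""
    return math.exp(-math.exp((t - mu) / sigma))

def series_reliability(t, components):
    """Exact series-system reliability: the system survives only if
    every component survives, so R_sys(t) is the product of R_i(t)."""
    r = 1.0
    for mu, sigma in components:
        r *= gumbel_min_reliability(t, mu, sigma)
    return r

# Two components with assumed Gumbel-minimum life parameters
# (location mu, scale sigma, in arbitrary time units).
comps = [(150.0, 20.0), (180.0, 25.0)]
r = series_reliability(100.0, comps)
print(f"series reliability at t=100: {r:.3f}")
```

For a parallel system, the same construction uses the maximal-mode EV distribution, and the system fails only when the last (maximum-life) component fails.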