Abstract: Knowledge is attributed to humans, whose problem-solving
behavior is subjective and complex. In today's knowledge economy,
the need to manage knowledge produced by a community of actors
cannot be overemphasized, because actors possess some level of tacit
knowledge which is generally difficult to articulate. Problem-solving
requires searching for and sharing knowledge among a group of actors
in a particular context. Knowledge expressed within the context of a
problem resolution must be capitalized for future reuse. In this
paper, an approach is proposed that permits dynamic capitalization of
relevant and reliable actors' knowledge in solving decision problems
following the Economic Intelligence process. A knowledge annotation
method and temporal attributes are used to handle the complexity of
the communication among actors and to contextualize the expressed
knowledge. A prototype was built to demonstrate the functionalities
of a collaborative Knowledge Management system based on this
approach. It was tested with sample cases, and the results showed
that dynamic capitalization leads to knowledge validation, hence
increasing the reliability of captured knowledge for reuse. The
system can be adapted to various domains.
Abstract: The highly nonlinear characteristics of drying processes
have prompted researchers to seek new nonlinear control solutions.
However, the relation between implementation complexity, on-line
processing complexity, reliability of the control structure, and
controller performance is not well established. The present paper
proposes high-performance nonlinear fuzzy controllers for real-time
operation of a drying machine, developed under a consistent match
between those issues. A PCI-6025E data acquisition device from
National Instruments® was used, and the control system was fully
designed in the MATLAB®/SIMULINK® environment. The drying
parameters, namely relative humidity and temperature, were controlled
through MIMO Hybrid Bang-bang+PI (BPI) and Four-dimensional Fuzzy
Logic (FLC) real-time controllers to perform drying tests on
biological materials. The performance of the drying strategies was
compared through several criteria, which are reported without
controller retuning. The performance analysis showed that the FLC
performed much better than the BPI controller: the absolute errors
were lower than 8.85% for the Fuzzy Logic Controller, about three
times lower than the experimental results with BPI control.
Abstract: CEACAM-6 antigen (C6AG) is a biomarker for colorectal cancer (CRC). Therefore, this study aims to develop a novel, simple and low-cost CEACAM-6 antigen immunosensor (C6AG-IMS), based on electrical impedance measurement, for precise determination of C6AG. A low-cost screen-printed graphite electrode was constructed and used as the sensor, with CEACAM-6 antibody (C6AB) immobilized on it. The procedures of sensor fabrication and antibody immobilization are simple and low-cost. Measurement of the electrical impedance over a definite frequency range (0.43–1.26 MHz) showed that the C6AG-IMS has an excellent linear (r² > 0.9) response range (8.125–65 pg/mL), covering the normal physiological and pathological ranges of blood C6AG levels. The C6AG-IMS also has excellent reliability and validity, with an intraclass correlation coefficient of 0.97. In conclusion, a novel, simple, low-cost and reliable C6AG-IMS was designed and developed, able to accurately determine blood C6AG levels across the pathological and normal physiological ranges. The C6AG-IMS can provide point-of-care, immediate screening results to the user at home.
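The calibration workflow behind a result like the one above (a linear impedance response over 8.125–65 pg/mL with r² > 0.9) can be sketched as an ordinary least-squares fit that is then inverted to read off an unknown concentration. This is a minimal illustration only: the impedance readings below are invented for the example and are not the paper's data.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b, r2)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1.0 - ss_res / ss_tot
    return a, b, r2

# Illustrative calibration standards within the reported 8.125-65 pg/mL
# range; the impedance values (arbitrary units) are made up.
conc = [8.125, 16.25, 32.5, 65.0]          # pg/mL
impedance = [120.0, 150.5, 211.0, 332.0]   # hypothetical readings

a, b, r2 = fit_line(conc, impedance)
unknown_z = 250.0                          # impedance of an unknown sample
estimated_conc = (unknown_z - b) / a       # invert the calibration line
```

Inverting the fitted line is how a single impedance reading is mapped back to an antigen concentration once the calibration is established.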
Abstract: An advanced Monte Carlo simulation method, called Subset Simulation (SS), for time-dependent reliability prediction of underground pipelines is presented in this paper. SS provides better resolution at low failure probability levels by efficiently investigating the rare failure events that are commonly encountered in pipeline engineering applications. In the SS method, random samples leading to progressive failure are generated efficiently and used for computing probabilistic performance through statistical variables. SS gains its efficiency by expressing a small-probability event as a product of a sequence of intermediate events with larger conditional probabilities. The efficiency of SS has been demonstrated by numerical studies, and attention in this work is devoted to scrutinising the robustness of the SS application in pipe reliability assessment. It is hoped that this development work can promote the use of SS tools for uncertainty propagation in the decision-making process of underground pipeline network reliability prediction.
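The product-of-conditional-probabilities idea behind SS can be illustrated with a minimal one-dimensional sketch, not the paper's pipeline model: estimating the rare exceedance probability P[X ≥ 4] for a standard normal X (about 3.2×10⁻⁵) via nested intermediate thresholds and a modified Metropolis sampler. The sample size, level probability p0, and proposal width are arbitrary choices for the illustration.

```python
import math
import random

def normal_pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def subset_simulation(g_crit, n=2000, p0=0.1, seed=7):
    """Estimate the rare-event probability P[X >= g_crit] for X ~ N(0, 1)
    by Subset Simulation: the small probability is built up as a product
    of larger conditional probabilities of nested intermediate events."""
    rng = random.Random(seed)
    # Level 0: plain Monte Carlo from the prior.
    samples = [rng.gauss(0.0, 1.0) for _ in range(n)]
    prob = 1.0
    nc = int(p0 * n)                 # seeds kept per level
    for _ in range(20):              # safety cap on the number of levels
        samples.sort(reverse=True)
        if samples[nc - 1] >= g_crit:
            # Enough samples already fail: finish with a direct count.
            frac = sum(1 for x in samples if x >= g_crit) / len(samples)
            return prob * frac
        b = samples[nc - 1]          # intermediate threshold (p0-quantile)
        prob *= p0
        seeds = samples[:nc]
        # Modified Metropolis: grow chains conditional on {x >= b}.
        samples = []
        per_chain = n // nc
        for x in seeds:
            for _ in range(per_chain):
                cand = x + rng.uniform(-1.0, 1.0)
                if cand >= b and rng.random() < normal_pdf(cand) / normal_pdf(x):
                    x = cand
                samples.append(x)
    frac = sum(1 for x in samples if x >= g_crit) / len(samples)
    return prob * frac
```

Each level only has to resolve a conditional probability near p0, which is why SS reaches probabilities of 10⁻⁵ and below with far fewer samples than direct Monte Carlo.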
Abstract: Software reliability prediction offers a great opportunity to measure the software failure rate at any point throughout system test. A software reliability prediction model provides a technique for improving reliability. Software reliability is a very important factor in estimating overall system reliability, which depends on the individual component reliabilities. It differs from hardware reliability in that it reflects design perfection. The main reason for software reliability problems is the high complexity of software. Various approaches can be used to improve the reliability of software. In this article, we focus on a software reliability model, assuming that there is time redundancy, whose value (the number of repeated transmissions of basic blocks) can be an optimization parameter. We consider the given mathematical model under the assumption that the system may suffer not only irreversible failures but also failures that can be treated as self-repairing, which significantly affect the reliability and accuracy of information transfer. The main task of this paper is to find the time distribution function (DF) of the transmission of an instruction sequence that consists of a random number of basic blocks. We consider the system software unreliable, with the time between adjacent failures following an exponential distribution.
Abstract: This paper presents reliability evaluation techniques
which are applied in distribution system planning studies and
operation. Reliability of distribution systems is an important issue in
power engineering for both utilities and customers. Reliability is a
key issue in the design and operation of electric power distribution
systems and loads. Reliability evaluation of distribution systems has
been the subject of many recent papers and the modeling and
evaluation techniques have improved considerably.
Abstract: The study was designed to develop a positive emotion
regulation questionnaire (PERQ) that assesses positive emotion
regulation strategies through self-report. The 14 items developed
for the survey instrument of the study were based upon the
literature regarding elements of positive regulation strategies.
319 elementary students (aged 12 to 14) were recruited from three
public elementary schools and surveyed on their use of positive
emotion regulation strategies. Of the 319 subjects, 20 returned
invalid questionnaires, yielding a response rate of 92%. The data
collected were analyzed through methods such as item analysis,
factor analysis, and structural equation modeling. Based on the
results of the item analysis, the formal survey instrument was
reduced to 11 items. A principal axis factor analysis with varimax
rotation was performed on the responses, resulting in a two-factor
solution (savoring strategy and neutralizing strategy), which
accounted for 55.5% of the total variance. The two-factor structure
of the scale was then also confirmed by structural equation
modeling. Finally, the reliability coefficients of the two factors
were Cronbach's α = .92 and .74. A gender difference was found only
in the savoring strategy. In conclusion, the positive emotion
regulation strategies questionnaire offers a brief, internally
consistent, and valid self-report measure for understanding the
emotion regulation strategies of children that may be useful to
researchers and applied professionals.
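The internal-consistency figures quoted above (Cronbach's α = .92 and .74) come from the standard formula α = k/(k−1) · (1 − Σ var(itemᵢ)/var(total)). A minimal sketch, with made-up item scores rather than the study's data, is:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.
    items: list of k columns, each holding one item's scores over n subjects."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]  # scale totals
    return (k / (k - 1)) * (1.0 - item_var_sum / var(totals))

# Three hypothetical, positively correlated items scored by five subjects.
items = [
    [1, 2, 3, 4, 5],
    [2, 2, 3, 4, 5],
    [1, 3, 3, 4, 4],
]
alpha = cronbach_alpha(items)
```

Items that co-vary strongly make the total-score variance large relative to the summed item variances, which pushes α toward 1.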
Abstract: The mixed oxide nuclear fuel (MOX) of U and Pu contains several percent of fission products and minor actinides, such as neptunium, americium and curium. It is important to determine accurately the decay heat from curium isotopes, as they contribute significantly to the decay heat of MOX fuel. This heat generation can cause samples to melt very quickly if excessive quantities of curium are present. In the present paper, we introduce a new approach that can predict the decay heat from curium isotopes. This work is part of a project funded by King Abdulaziz City of Science and Technology (KASCT) under the Long-Term Comprehensive National Plan for Science, Technology and Innovations, and takes place at King Abdulaziz University (KAU), Saudi Arabia. The approach is based on the numerical solution of the coupled linear differential equations that describe the decays and buildups of many nuclides, in order to calculate the decay heat produced after shutdown. Results show the consistency and reliability of the approach applied.
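The coupled linear differential equations referred to above are of the Bateman type, dNᵢ/dt = −λᵢNᵢ + λᵢ₋₁Nᵢ₋₁, with the decay heat given by Q(t) = Σ λᵢNᵢ(t)Eᵢ. A minimal sketch for a hypothetical two-member chain A → B → (stable), integrated with an explicit Euler scheme, is shown below; the decay constants, energies per decay, and initial inventory are illustrative values, not curium data.

```python
def decay_chain_heat(lam_a, lam_b, e_a, e_b, n0, t_end, steps=20000):
    """Integrate the coupled linear decay equations for a two-member chain
    A -> B -> (stable) with an explicit Euler scheme:
        dNa/dt = -lam_a * Na
        dNb/dt = +lam_a * Na - lam_b * Nb
    Returns the decay heat Q(t_end) = lam_a*Na*e_a + lam_b*Nb*e_b together
    with the final nuclide inventories Na, Nb."""
    dt = t_end / steps
    na, nb = n0, 0.0                  # start with pure A, no B
    for _ in range(steps):
        dna = -lam_a * na
        dnb = lam_a * na - lam_b * nb
        na += dna * dt
        nb += dnb * dt
    return lam_a * na * e_a + lam_b * nb * e_b, na, nb

# Illustrative run: lambda_A = 0.1, lambda_B = 0.5 (per unit time),
# energies 1.0 and 2.0 (arbitrary units), 1e6 initial atoms of A.
q, na, nb = decay_chain_heat(0.1, 0.5, 1.0, 2.0, 1e6, 10.0)
```

For this two-member chain the analytic Bateman solution is available (N_A = N₀e^{−λ_At}, N_B = N₀λ_A/(λ_B−λ_A)·(e^{−λ_At} − e^{−λ_Bt})), which makes the numerical scheme easy to verify before it is applied to longer chains.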
Abstract: The mechanism behind electromigration and
thermomigration failure in flip-chip solder joints with Cu-pillar
bumps was investigated in this paper using the finite element
method. The hot spot and the current crowding occur in the upper
corner of the copper pillar rather than in the solder, as in a
common solder ball. The simulation results show that the change in
thermal gradient is noticeable, which might greatly affect the
reliability of solder joints with Cu-pillar bumps under current
stressing. When the average applied current density in the solder is
increased from 1×10⁴ A/cm² to 3×10⁴ A/cm², the thermal gradient
increases from 74 K/cm to 901 K/cm at an ambient temperature of
25 °C. The force from a thermal gradient of 901 K/cm can nearly
induce thermomigration by itself. As the applied current increases,
the thermal gradient grows. It is proposed that thermomigration
likely causes a serious reliability issue for Cu-pillar-based
interconnects.
Abstract: This research aims at the development of a Multiple
Intelligences Measurement for Elementary Students. The structural
accuracy test and norm establishment are based on the Multiple
Intelligences Theory of Gardner. This theory consists of eight
aspects, namely linguistics, logic and mathematics, visual-spatial
relations, body and movement, music, human relations,
self-realization/self-understanding, and nature. The sample used in
this research consists of elementary school students (aged between
5 and 11 years). The size of the sample group, determined by the
Yamane table, was 2,504 students, recruited by multistage sampling.
Basic statistical analysis and construct validity testing were done
using confirmatory factor analysis. The research can be summarized
as follows: 1) The Multiple Intelligences Measurement, consisting of
120 items, is content-accurate. The internal consistency
reliability, according to the Kuder-Richardson method, of the whole
Multiple Intelligences Measurement equals .91. The difficulty of the
test items is between .39 and .83, and the discrimination is between
.21 and .85. 2) The Multiple Intelligences Measurement has construct
validity in a good range; that is, the 8 components and all 120 test
items are statistically significant at the .01 level. The chi-square
value equals 4357.7 (p = .00) at 244 degrees of freedom, the
Goodness of Fit Index equals 1.00, the Adjusted Goodness of Fit
Index equals .92, the Comparative Fit Index (CFI) equals .68, the
Root Mean Square Residual (RMR) equals 0.064, and the Root Mean
Square Error of Approximation equals 0.82. 3) The norms of the
Multiple Intelligences Measurement are categorized into three
levels: those with high intelligence have percentiles above 78,
those with moderate/medium intelligence have percentiles between 24
and 77.9, and those with low intelligence have percentiles of 23.9
and below.
Abstract: Self-efficacy, self-reliance, and motivation were
examined in a quasi-experimental study with 178 sophomore university
students. Participants used an interactive cardiovascular anatomy
and physiology CD-ROM, and completed a 15-item questionnaire. The
reliability of the questionnaire was established using Cronbach's
alpha. Post-tests and course grades were examined using a t-test,
which demonstrated no significant difference. Results of an
item-by-item analysis of the questionnaire showed overall
satisfaction with the teaching methodology and varied results for
self-efficacy, self-reliance, and motivation. Kendall's Tau was
calculated for all items in the questionnaire.
Abstract: The reliability of the tools developed to identify
learning styles is essential for determining students' learning
styles trustworthily. For this purpose, the psychometric features of
the Grasha-Riechman Student Learning Style Inventory developed by
Grasha were studied to contribute to this field. The study was
carried out on 6th, 7th, and 8th graders of 10 primary education
schools in Konya. The inventory was applied twice with an interval
of one month, and according to the data of this application, the
reliability coefficients of the six sub-dimensions posited in the
theory of the inventory were found to be medium. Besides, it was
found that the inventory does not have a six-factor structure for
either Mathematics or English courses, as represented in the theory.
Abstract: The paper discusses complexity of component-based
development (CBD) of embedded systems. Although CBD has its
merits, it must be augmented with methods to control the complexities
that arise due to resource constraints, timeliness, and run-time deployment
of components in embedded system development. Software
component specification, system-level testing, and run-time reliability
measurement are some ways to control the complexity.
Abstract: Considering payload, reliability, security and operational lifetime as major constraints in the transmission of images, we put forward in this paper a steganographic technique implemented at the physical layer. We suggest the transmission of halftoned images (payload constraint) in wireless sensor networks to reduce the amount of transmitted data. For low-power and interference-limited applications, Turbo codes provide suitable reliability. Ensuring security is one of the highest priorities in many sensor networks. The Turbo code structure, apart from providing forward error correction, can be utilized to provide encryption. We first consider the halftoned image, and then present the method of embedding a block of data (called the secret) in this halftoned image during the turbo encoding process. The small modifications required at the turbo decoder end to extract the embedded data are presented next. The implementation complexity and the degradation of the BER (bit error rate) in the Turbo-based stego system are analyzed. Using some of the entropy-based cryptanalytic techniques, we show that the strength of our Turbo-based stego system approaches that found in one-time pads (OTPs).
Abstract: The dual bell nozzle is a promising concept among the
altitude-adaptation nozzles, which offer increased nozzle
performance in rocket engines. Its advantage is the simplicity it
offers due to the absence of any additional mechanical device or
movable parts. Hence it offers reliability along with improved
nozzle performance, as demanded by future launch vehicles. Among
other issues, the flow transition to the extension nozzle of a dual
bell nozzle is one of the major issues being studied in the
development of dual bell nozzles. A parameter named the
over-expansion factor, which controls the value of the wall
inflection angle, has been reported to have a substantial influence
on this transition process. This paper studies, through CFD and cold
flow experiments, the effect of the over-expansion factor on flow
transition in dual bell nozzles.
Abstract: The quality of a machined surface is becoming more and more important to justify the increasing demands of sophisticated component performance, longevity, and reliability. Usually, any machining operation leaves its own characteristic evidence on the machined surface in the form of finely spaced micro-irregularities (surface roughness) left by the associated indeterministic characteristics of the different elements of the system: tool, machine, workpart, and cutting parameters. However, one of the most influential sources affecting surface roughness in machining is the instantaneous state of the tool edge. The main objective of the current work is to relate the in-process immeasurable cutting-edge deformation and surface roughness to more reliable, easy-to-measure force signals, using robust non-linear time-dependent regression modeling techniques. Time-dependent modeling is beneficial when modern machining systems, such as adaptive control techniques, are considered, where the state of the machined surface and the health of the cutting edge are monitored, assessed and controlled online using real-time information provided by the variability encountered in the measured force signals. Correlation between wear propagation and roughness variation is developed throughout the different edge lifetimes. The surface roughness is further evaluated in the light of the variation in both the static and the dynamic force signals. Consistent correlation is found between surface roughness variation and tool wear progress within the initial and constant wear regions. During the first few seconds of cutting, the expected and well-known trend of the effect of the cutting parameters is observed: surface roughness is positively influenced by the level of the feed rate and negatively by the cutting speed. As cutting continues, roughness is affected, to different extents, by the rather localized wear modes on either the tool nose or its flank areas. Moreover, it seems that roughness varies as the wear attitude transfers from one mode to another; in general, it is shown that roughness improves as wear increases, but with possible corresponding workpart dimensional inaccuracy. The dynamic force signals are found to be reasonably sensitive in simulating either the progressive or the random modes of tool edge deformation. While the frictional force components, feeding and radial, are found informative regarding progressive wear modes, the vertical (power) component is found to be a more representative carrier of the system instability resulting from the edge's random deformation.
Abstract: A method for solving linear and non-linear Goursat
problems is given using the two-dimensional differential transform
method. The approximate solution of the problem is calculated in
the form of a series with easily computable terms, and the exact
solution can also be recovered from the known closed forms of the
series solutions. The method can easily be applied to many linear
and non-linear problems and is capable of reducing the size of the
computational work. Several examples are given to demonstrate the
reliability and the performance of the presented method.
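As a hedged illustration of the two-dimensional differential transform method (the concrete problem below is our own choice, not necessarily one of the paper's examples), consider the linear Goursat problem u_xy = u with u(x,0) = eˣ and u(0,y) = eʸ, whose exact solution is e^(x+y). The transform U(k,h) = (1/(k!h!)) ∂^{k+h}u/∂xᵏ∂yʰ at the origin turns the PDE into the algebraic recurrence (k+1)(h+1)U(k+1,h+1) = U(k,h), with boundary transforms U(k,0) = 1/k! and U(0,h) = 1/h!:

```python
import math

def goursat_dtm(terms=15):
    """Two-dimensional differential transform for the linear Goursat
    problem u_xy = u, u(x,0) = e^x, u(0,y) = e^y.
    The PDE becomes (k+1)(h+1) U(k+1,h+1) = U(k,h); the boundary
    conditions give U(k,0) = 1/k! and U(0,h) = 1/h!."""
    U = [[0.0] * terms for _ in range(terms)]
    for k in range(terms):
        U[k][0] = 1.0 / math.factorial(k)   # transform of e^x boundary
        U[0][k] = 1.0 / math.factorial(k)   # transform of e^y boundary
    for k in range(terms - 1):
        for h in range(terms - 1):
            U[k + 1][h + 1] = U[k][h] / ((k + 1) * (h + 1))
    return U

def evaluate(U, x, y):
    """Sum the double series u(x,y) = sum_k sum_h U(k,h) x^k y^h."""
    return sum(U[k][h] * x**k * y**h
               for k in range(len(U)) for h in range(len(U)))
```

Here the recurrence yields U(k,h) = 1/(k!h!), so the truncated double series reproduces e^(x+y); this is the sense in which "the exact solution can be recovered from the known closed form of the series."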
Abstract: Objective: To review recent publications on patient
safety culture and investigate the relationship between leadership
behavior, safety culture, and safety performance in the healthcare
industry. Method: This is a cross-sectional study; 350
questionnaires were mailed to hospital workers, with 195 valid
responses obtained, a 55.7% valid response rate. Confirmatory factor
analysis (CFA) was carried out to test the factor structure and to
determine whether the composite reliability was significant, with
factor loadings of >0.5 resulting in an acceptable model fit.
Results: One-way ANOVA showed that physicians have significantly
more negative patient safety culture perceptions and safety
performance perceptions than non-physicians. Conclusions: The path
analysis results show that leadership behavior affects safety
culture and safety performance in the healthcare industry. Safety
performance was affected and improved with contingency leadership
and a positive patient safety organization culture. The study
suggests improving safety performance by providing a well-managed
system that includes consideration of leadership, hospital worker
training courses, and a solid safety reporting system.
Abstract: The analytical prediction of the decay heat resulting
from the fast-neutron fission of actinides was initiated under
project 10-MAT1134-3, funded by King Abdulaziz City of Science and
Technology (KASCT) under the Long-Term Comprehensive National Plan
for Science, Technology and Innovations and managed by a team from
King Abdulaziz University (KAU), Saudi Arabia; Argonne National
Laboratory (ANL) has collaborated with the KAU team to assist in the
computational analysis. In this paper, the numerical solution of the
coupled linear differential equations that describe the decays and
buildups of the minor fission products (MFA) has been used to
predict the total decay heat and its components from the
fast-neutron fission of ²³⁵U and ²³⁹Pu. The reliability of the
present approach is illustrated via systematic comparisons with the
measurements reported by the University of Tokyo at the YAYOI
reactor.
Abstract: In today's power systems, energy generation from wind has
become very important. The electrical energy produced by wind
turbines at a site depends on several factors, such as the wind
speed and wind profile of the site and, in particular, the turbines'
cut-in wind speed, rated wind speed, and cut-out wind speed. On the
other hand, several different types of turbines are available on the
market. Therefore, selecting a turbine whose capacity can meet the
needs of electricity consumers with high efficiency is important and
necessary. In this context, calculating the amount of wind power is
very important for optimizing the overall network, for system
operation, and for determining the wind power parameters.
In this article, the Monte Carlo method is used to calculate the
output of a wind power plant connected to the national network in
the Manjil wind region, to select the best type of turbine, and to
derive a power delivery profile appropriate to the network.
Wind speed data from the Manjil wind site were taken at one-minute
intervals over the course of the year. The necessary simulations,
based on the random-number simulation method and repetition, were
carried out using MATLAB and Excel.
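A Monte Carlo wind-power calculation of the kind described above can be sketched as follows. All parameters here are illustrative assumptions, not the Manjil site data or the paper's turbine list: the power curve uses a cut-in speed of 3 m/s, rated speed of 12 m/s, cut-out speed of 25 m/s, and 2000 kW rating, and the wind speeds are drawn from an assumed Weibull distribution (scale 8 m/s, shape 2).

```python
import random

def power_curve(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2000.0):
    """Simplified turbine power curve (kW): zero below cut-in and at or
    above cut-out, a cubic ramp between cut-in and rated speed, and flat
    rated power in between. All parameters are illustrative."""
    if v < v_in or v >= v_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * (v**3 - v_in**3) / (v_rated**3 - v_in**3)

def expected_output(n=100000, seed=3):
    """Monte Carlo estimate of the mean power (kW): draw random wind
    speeds from an assumed Weibull distribution (scale 8 m/s, shape 2)
    and average the power curve over the draws."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        v = rng.weibullvariate(8.0, 2.0)   # assumed wind-speed model
        total += power_curve(v)
    return total / n
```

Repeating this estimate for each candidate turbine's power curve, using wind speeds sampled from the site's measured distribution, is one way to compare turbine types and build an expected power delivery profile for the network.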