Abstract: The traditional Failure Mode and Effects Analysis
(FMEA) uses Risk Priority Number (RPN) to evaluate the risk level
of a component or process. The RPN index is determined by
calculating the product of severity, occurrence and detection indexes.
The most critically debated disadvantage of this approach is that
various sets of these three indexes may produce an identical value of
RPN. This research paper seeks to address the drawbacks in
traditional FMEA and to propose a new approach to overcome these
shortcomings. The Risk Priority Code (RPC) is used to prioritize
failure modes when two or more failure modes have the same RPN.
A new method is proposed to prioritize failure modes when there is a
disagreement in the ranking scales for severity, occurrence and detection.
An Analysis of Variance (ANOVA) is used to compare the means of
RPN values, and the SPSS (Statistical Package for the Social Sciences)
statistical analysis package is used to analyze the data. The results
presented are based on two case studies. It is found that the proposed
methodology resolves the limitations of the traditional
FMEA approach.
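The RPN ambiguity this abstract addresses can be illustrated in a few lines; a minimal sketch, assuming the conventional 1-10 rating scale for each index (the scale is an assumption, not stated in the abstract):

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number: the product of the three FMEA indexes,
    each conventionally rated on a 1-10 scale (assumed here)."""
    return severity * occurrence * detection

# Very different risk profiles collapse to the same RPN of 100,
# which is why a tie-breaking scheme such as the proposed RPC is needed:
print(rpn(10, 10, 1), rpn(2, 5, 10), rpn(4, 5, 5))  # 100 100 100
```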
Abstract: Complex networks have been intensively studied across
many fields, especially in Internet technology, biological engineering,
and nonlinear science. Software is built up out of many interacting
components at various levels of granularity, such as functions, classes,
and packages, representing another important class of complex networks.
It can also be studied using complex network theory. Over the
last decade, many papers on the interdisciplinary research between
software engineering and complex networks have been published.
It provides a different dimension to our understanding of software
and also is very useful for the design and development of software
systems. This paper will explore how to use the complex network
theory to analyze software structure, and briefly review the main
advances in corresponding aspects.
Abstract: The existing literature on design reasoning seems to give
one-sided accounts of expert design behaviour based on
internal processing. In the same way, ecological theories seem to
focus one-sidedly on external elements, resulting in the lack of a unifying
theory of design cognition. Although current extended design cognition
studies acknowledge the intellectual interaction between internal and
external resources, there still seems to be insufficient understanding
of the complexities involved in such interactive processes. As
such, this paper proposes a novel multi-directional model for design
researchers to map the complex and dynamic conduct-controlling
behaviour in which both the computational and ecological
perspectives are integrated in a vertical manner. A clear distinction
between identified intentional and emerging physical drivers, and the
relationships between them during the early phases of experts' design
processes, is demonstrated by presenting a case study in which the
model was employed.
Abstract: The RR interval series is non-stationary and unevenly
spaced in time. Estimating its power spectral density (PSD) using
traditional techniques such as the FFT requires resampling at uniform
intervals, and researchers have used different interpolation
techniques as resampling methods. All of these resampling methods
introduce a low-pass filtering effect in the power spectrum. The
Lomb transform is a means of obtaining PSD estimates directly from
an irregularly sampled RR interval series, thus avoiding resampling. In
this work, the superiority of the Lomb transform method over the
FFT-based approach, with linear and cubic spline interpolation applied
as resampling methods, is established in terms of the reproduction of
exact frequency locations as well as the relative magnitudes of each
spectral component.
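The Lomb estimate can be computed directly from the uneven sample times; a minimal numpy sketch of the classic Lomb periodogram follows (the sinusoidal test signal, frequency grid and sample times are illustrative, not the paper's RR data):

```python
import numpy as np

def lomb(t, y, freqs):
    """Classic Lomb periodogram for an unevenly sampled, zero-mean series."""
    p = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        # time offset tau makes the estimate invariant to time shifts
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        p[i] = 0.5 * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return p

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 400))   # unevenly spaced sample times
y = np.sin(2 * np.pi * 0.1 * t)             # a 0.1 Hz spectral component
y -= y.mean()

freqs = np.linspace(0.02, 0.5, 241)
peak = freqs[np.argmax(lomb(t, y, freqs))]  # ~0.1 Hz, with no resampling
```

No interpolation or uniform resampling is involved, which is exactly why the method avoids the low-pass filtering effect described above.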
Abstract: Building intelligent traffic guide systems has been an
interesting subject recently. A good system should be able to observe
all important visual information to be able to analyze the context of
the scene. To do so, signs in general, and traffic signs in particular,
are usually taken into account as they contain rich information for
these systems. Therefore, many researchers have devoted effort to the
field of sign recognition. Sign localization, or sign detection, is the most
important step in the sign recognition process. This step filters out
non-informative areas in the scene and provides candidate locations for
later steps. In this paper, we apply a new approach to detecting sign
locations using a new color-invariant model. Experiments are carried
out with different datasets introduced in other works where authors
claimed the difficulty in detecting signs under unfavorable imaging
conditions. Our method is simple, fast and most importantly it gives
a high detection rate in locating signs.
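The abstract does not specify the color-invariant model itself, so the sketch below uses the classic normalized-rgb chromaticity, which discounts illumination intensity, purely as an illustrative stand-in; the thresholds are hypothetical:

```python
import numpy as np

def red_sign_candidates(img):
    """Binary candidate mask for red signs from normalized chromaticities.

    Normalized rgb (r = R/(R+G+B), g = G/(R+G+B)) is a simple
    illumination-invariant representation; this is NOT the paper's model,
    only a generic example of color-invariant candidate filtering.
    """
    img = img.astype(np.float64)
    s = img.sum(axis=2) + 1e-9          # avoid division by zero
    r, g = img[..., 0] / s, img[..., 1] / s
    return (r > 0.45) & (g < 0.3)       # hypothetical thresholds

# A saturated red pixel passes; a grey (achromatic) pixel does not.
img = np.zeros((1, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 30, 30)    # red
img[0, 1] = (100, 100, 100)  # grey
mask = red_sign_candidates(img)
```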
Abstract: Electrical Discharge Machining (EDM) is especially
used for manufacturing parts with complex 3-D geometry and of hard
materials that are extremely difficult to machine by conventional
machining processes. In this paper, the authors review the research work
carried out in the development of die-sinking EDM within the past
decades for the improvement of machining characteristics such as
Material Removal Rate, Surface Roughness and Tool Wear Ratio. In
this review various techniques reported by EDM researchers for
improving the machining characteristics have been categorized as
process parameters optimization, multi spark technique, powder
mixed EDM, servo control systems and pulse discrimination. At the
end, a flexible machine controller is suggested for die-sinking EDM to
enhance the machining characteristics and to achieve high-level
automation. Thus, die-sinking EDM can be integrated into a Computer
Integrated Manufacturing environment to meet the needs of agile
manufacturing systems.
Abstract: Fip-gts, an immunomodulatory protein purified from Ganoderma tsugae, has been reported to possess therapeutic effects in the treatment of cancer and autoimmune disease. For medicinal application, recombinant Fip-gts was successfully expressed and purified in Sf21 insect cells in our previous work. It is important to evaluate the immunomodulatory activity of rFip-gts. To assess the immunomodulatory potential of rFip-gts, T lymphocytes from murine splenocytes were used in the present study. The results revealed that rFip-gts induced the formation of cellular aggregates. Additionally, the expression of IL-2 and IFN-γ was up-regulated after treatment with rFip-gts, with a corresponding increase in the production of IL-2 and IFN-γ in a dose-dependent manner. The results showed that rFip-gts has immunomodulatory activity, inducing Th1 lymphocytes from murine splenocytes to release IL-2 and IFN-γ, which suggests that rFip-gts may have therapeutic potential in vivo as an immune modulator.
Abstract: This research aims to examine the key success factors
for the diffusion of mobile entertainment services in Malaysia. The
drivers and barriers observed in this research include perceived
benefit; concerns pertaining to pricing, product and technological
standardization, privacy and security; as well as influences from
peers and community. An analysis of a Malaysian survey of 384
respondents aged 18 to 25 years shows that subscribers placed
greater importance on perceived benefit of mobile entertainment
services compared to other factors. Results of the survey also show
that there are strong positive correlations between all the factors,
with pricing issue–perceived benefit showing the strongest
relationship. This paper aims to provide an extensive study of the
drivers and barriers that could be used to derive an architecture for
entertainment service provision, serving as a guide for telcos to
outline suitable approaches in order to encourage mass market
adoption of mobile entertainment services in Malaysia.
Abstract: This study was conducted on the Ismailoglu grape type (Vitis
vinifera L.), whose 15-year-old vines, grown on their own roots, were
studied during the 2013 vegetation period in Nevşehir province, Turkey.
In this research, it was investigated whether the applications of
Control (C), 1/3 cluster tip reduction (1/3 CTR), shoot tip reduction
(STR), 1/3 CTR + STR, TKI-HUMAS (TKI-HM) (Soil) (S), TKI-HM
(Foliar) (F), TKI-HM (S + F), 1/3 CTR + TKI-HM (S), 1/3 CTR
+ TKI-HM (F), 1/3 CTR + TKI-HM (S+F), STR + TKI-HM (S), STR
+ TKI-HM (F), STR + TKI-HM (S + F), 1/3 CTR + STR+TKI-HM
(S), 1/3 CTR + STR + TKI-HM (F), 1/3 CTR + STR + TKI-HM (S +
F) on yield and yield components of Ismailoglu grape type. The
results were obtained as the highest fresh grape yield (16.15 kg/vine)
with TKI-HM (S), as the highest cluster weight (652.39 g) with 1/3
CTR + STR, as the highest 100 berry weight (419.07 g) with 1/3
CTR + STR + TKI-HM (F), as the highest maturity index (44.06)
with 1/3 CTR, as the highest must yield (810.00 ml) with STR +
TKI-HM (F), as the highest intensity of L* color (42.04) with TKI-HM
(S + F), as the highest intensity of a* color (2.60) with 1/3 CTR
+ TKI-HM (S), as the highest intensity of b* color (7.16) with 1/3
CTR + TKI-HM (S) applications. To increase the fresh grape yield of
Ismailoglu grape type can be recommended TKI-HM (S) application.
Abstract: Research on damage of gears and gear pairs using
vibration signals remains very attractive, because vibration signals
from a gear pair are complex in nature and not easy to interpret.
Predicting gear pair defects by analyzing changes in the vibration signals
of gear pairs in operation is a very reliable method. Therefore, a
suitable vibration signal processing technique is necessary to extract
defect information generally obscured by noise from the dynamic
factors of other gear pairs. This article presents the value of cepstrum
analysis in vehicle gearbox fault diagnosis. The cepstrum represents the
overall power content of a whole family of harmonics and sidebands
when more than one family of sidebands is present at the same time.
The concept of the measurement and analysis involved in using the
technique is briefly outlined. Cepstrum analysis is used for the detection
of an artificial pitting defect in a vehicle gearbox loaded at
different speeds and torques. The test stand is equipped with three
dynamometers; the input dynamometer serves as the internal
combustion engine, and the output dynamometers introduce the load on
the flanges of the output joint shafts. The pitting defect is
manufactured on the tooth flank of the fifth-speed gear on the
secondary shaft. Also, a method for the diagnosis of gear faults based
on the order cepstrum is presented. The procedure is illustrated with
the experimental vibration data of the vehicle gearbox. The results
show the effectiveness of Cepstrum analysis in detection and
diagnosis of the gear condition.
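The cepstral measure described above, the inverse transform of the log spectrum, can be sketched in a few lines of numpy; the gear-like signal below (a harmonic family spaced 10 Hz apart, as a gear rotating at 10 rev/s would modulate mesh vibration) is an illustration, not the paper's measured data:

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log magnitude spectrum.

    A family of harmonics/sidebands spaced df apart in the spectrum
    shows up as a cepstral peak ("rahmonic") at quefrency 1/df."""
    spectrum = np.abs(np.fft.fft(x)) + 1e-12  # small floor avoids log(0)
    return np.fft.ifft(np.log(spectrum)).real

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
# Illustrative signal: 20 harmonics spaced 10 Hz apart
x = sum(np.sin(2 * np.pi * 10 * k * t) for k in range(1, 21))

ceps = real_cepstrum(x)
quefrency = np.arange(len(x)) / fs
# the first rahmonic appears near quefrency 0.1 s = 1 / (10 Hz spacing)
```

Collapsing a whole family of equally spaced components into a single quefrency peak is what makes the cepstrum convenient when several sideband families are present at once.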
Abstract: In the 3D-wavelet video coding framework, temporal
filtering is done along the trajectory of motion using Motion
Compensated Temporal Filtering (MCTF). Hence, a computationally
efficient motion estimation technique is essential for MCTF. In this
paper a predictive technique is proposed in order to reduce the
computational complexity of the MCTF framework, by exploiting
the high correlation among the frames in a Group of Pictures (GOP).
The proposed technique applies the coarse and fine searches of any
fast block-based motion estimation algorithm only to the first pair of
frames in a GOP. The generated motion vectors are supplied to the
subsequent frames, even at subsequent temporal levels, and only a fine
search is carried out around those predicted motion vectors. Hence, the
coarse search is skipped for all motion estimation in a GOP
except for the first pair of frames. The technique has been tested for
different fast block based motion estimation algorithms over different
standard test sequences using MC-EZBC, a state-of-the-art scalable
video coder. The simulation results reveal a substantial reduction (i.e.,
20.75% to 38.24%) in the number of search points during motion
estimation, without compromising the quality of the reconstructed
video compared to non-predictive techniques. Since the motion
vectors of all pairs of frames in a GOP except the first pair take
values within ±1 of the motion vectors of the previous pair of
frames, the number of bits required for motion vectors is also
reduced by 50%.
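The predictive step, reusing the previous pair's motion vectors and refining only within ±1, can be sketched as follows (the block size, SAD cost and toy frames are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences, the usual block-matching cost."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def refine_mv(ref, cur, y, x, bs, pred, radius=1):
    """Fine-only search: test the predicted vector and its ±radius
    neighbourhood, skipping the coarse search entirely, as the predictive
    scheme does for all but the first frame pair of a GOP."""
    block = cur[y:y + bs, x:x + bs]
    best_cost, best_mv = None, pred
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            my, mx = y + pred[0] + dy, x + pred[1] + dx
            if 0 <= my <= ref.shape[0] - bs and 0 <= mx <= ref.shape[1] - bs:
                cost = sad(ref[my:my + bs, mx:mx + bs], block)
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (pred[0] + dy, pred[1] + dx)
    return best_mv

# Toy frames: cur is ref shifted one pixel down and right, so a block in
# cur matches the ref block one pixel up and left of it.
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (16, 16), dtype=np.uint8)
cur = np.roll(ref, (1, 1), axis=(0, 1))
mv = refine_mv(ref, cur, 4, 4, 4, pred=(0, 0))  # -> (-1, -1)
```

Because only a (2·radius+1)² neighbourhood is evaluated, the per-block search cost drops sharply relative to a full coarse-plus-fine search, which is the source of the reported reduction in search points.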
Abstract: The incidence of mechanical fracture of automobile
piston rings prompted the development of a fracture analysis
method for this case. The three rings (two compression rings and one
oil ring) were smashed into several parts during the power test (after
manufacturing of the engine), causing the piston and liner to be damaged.
Radial and oblique cracking occurred on the failed piston rings.
The aim of the fracture mechanics simulations presented in this paper
was the calculation of particular effective fracture mechanics
parameters, such as J-integrals and stress intensity factors. Crack
propagation angles were calculated as well. A two-dimensional
fracture analysis of the first compression ring has been carried out in
this paper using the ABAQUS CAE 6.5-1 software. Moreover, SEM
fractography was performed on the fracture surfaces and is discussed in
this paper. The results of the numerical calculations constitute the basis
for further research on the real object.
Abstract: In this paper, a Biochemical Methane Potential (BMP)
test provides a measure of the energy production potential from
co-digestion of frozen seafood wastewater and decanter
cake. The experiments were conducted at laboratory scale. The
suitable ratio of the frozen seafood wastewater and the decanter cake
was observed in the BMP test. The ratio of the co-digestion between
the frozen seafood wastewater and the decanter cake has impacts on
the biogas production and the energy production potential. The best
performance for energy production potential in the BMP test was
observed for 180 ml of frozen seafood wastewater combined with 10 g
of decanter cake. This ratio provided the maximum methane
production of 0.351 l CH4/g TCOD removed. The removal efficiencies
were 76.18%, 83.55%, 43.16% and 56.76% for TCOD, SCOD, TS and
VS, respectively. It can be concluded that decanter cake can
improve the energy production potential of frozen seafood
wastewater. The energy provided by co-digestion of frozen
seafood wastewater and decanter cake is approximately 19×10⁹
MJ/year in Thailand.
Abstract: Self-directed learning (SDL) was developed initially
for adult learning. Guglielmino constructed a scale to measure SDL.
Recent researchers have applied this concept to children. Although
there is sufficient theoretical evidence to suggest the possibility of
applying this concept to children, empirical evidence has not been
provided. This study aimed to examine the quality of SDL and to
construct a scale to measure SDL among young children. A modified
version of Guglielmino's scale was constructed and piloted with 183
subjects of age 9. The findings suggest that the qualities of SDL at young
ages are largely congruent with those of adults.
Abstract: Research related to standard product models and the
development of neutral manufacturing interfaces for numerically
controlled machines has been a significant topic for the last 25 years.
In this paper, a detailed description of a STEP implementation for
turn-mill manufacturing is discussed. It presents the requirements on
information content from the ISO 14649 data model and describes
the design of a STEP-NC framework applicable to turn-mill
manufacturing. In the framework, the EXPRESS-G and UML modeling
tools are used to depict the information content of the system and to
establish the basis of the information model requirements. A product
and manufacturing data model applicable to STEP-compliant
manufacturing is presented. The requirements of next-generation
turn-mill operations have been represented by a UML diagram.
Object-oriented classes of ISO 14649 have been developed on the
Visual Basic .NET platform to bind the static information model
represented by the UML diagram. An architecture of the proposed
system implementation is given on the basis of the established design
and manufacturing modules of the STEP-NC interface. Finally, a
Part 21 file process plan is generated as an illustration for turn-mill
components.
Abstract: The study was designed to develop a measurement of
the positive emotion regulation questionnaire (PERQ) that assesses
positive emotion regulation strategies through self-report. The 14
items developed for the survey instrument of the study were based
upon the literature regarding elements of positive regulation strategies.
319 elementary students (ages ranging from 12 to 14) were recruited
from three public elementary schools and surveyed on their use of
positive emotion regulation strategies. Of the 319 subjects, 20 returned
invalid questionnaires, yielding a response rate of 92%. The data collected
were analyzed through methods such as item analysis, factor analysis,
and structural equation modeling. Based on the results of item
analysis, the formal survey instrument was reduced to 11 items. A
principal axis factor analysis with varimax rotation was performed on
the responses, resulting in a two-factor solution (savoring strategy and
neutralizing strategy), which accounted for 55.5% of the total
variance. The two-factor structure of the scale was also confirmed by
structural equation modeling. Finally, the reliability coefficients of the
two factors were Cronbach's α = .92 and .74. A gender difference was
only found in savoring strategy. In conclusion, the positive emotion
regulation strategies questionnaire offers a brief, internally consistent,
and valid self-report measure for understanding the emotional
regulation strategies of children that may be useful to researchers and
applied professionals.
Abstract: School brawls have claimed the lives of students in
Jakarta. Previous school brawl studies investigated the causes using
group approaches, such as cognitive dissonance arising from
provocation and resentment among students in the schools. This
research focuses on individual factors as causes of school brawls,
namely characteristics of children with ADHD, lack of self-control
regulation, and level of depression. The results show that, in fact,
individual factors have little influence on the development of conduct
disorder: the students have good self-regulation control, insignificant
characteristics of children with ADHD, and moderate depression
levels. It is concluded that group factors are more significant than
individual factors in causing school brawls.
Abstract: This paper presents the simulation of fragmentation
warhead using a hydrocode, Autodyn. The goal of this research is to
determine the lethal range of such a warhead. This study investigates
the lethal range of warheads with and without steel balls as
preformed fragments. The results from the FE simulation, i.e. initial
velocities and ejected spray angles of fragments, are further processed
using an analytical approach so as to determine a fragment hit density
and the probability of kill of a modelled warhead. Simulating a
large number of preformed fragments inside a warhead requires
considerable computational resources. Therefore, this study attempts to
model the problem with an alternative approach that lumps an
equivalent mass of preformed fragments into the mass of the warhead
casing. This approach yields approximately 7% and 20% differences
in fragment velocities relative to the analytical results for one and two
layers of preformed fragments, respectively. The lethal ranges of the
simulated warheads are 42.6 m and 56.5 m for warheads with one and
two layers of preformed fragments, respectively, compared to 13.85
m for a warhead without preformed fragment. These lethal ranges are
based on the requirement of fragment hit density. The lethal ranges
which are based on the probability of kill are 27.5 m, 61 m and 70 m
for warheads with no preformed fragment, one and two layers of
preformed fragments, respectively.
Abstract: This paper presents a new feature-based dense stereo
matching algorithm that obtains the dense disparity map via dynamic
programming. After the extraction of suitable features, we apply
matching constraints such as the epipolar line, the disparity limit,
ordering, and a limit on the directional derivative of disparity. Also, a
coarse-to-fine multiresolution strategy is used to decrease the search
space and therefore increase the accuracy and processing speed. The
proposed method links the detected feature points into chains and
compares some of the feature points from different chains to
increase the matching speed. We also employ color stereo matching
to increase the accuracy of the algorithm. After feature
matching, we use dynamic programming to obtain the dense
disparity map. The approach differs from classical DP methods in
stereo vision, since it employs the sparse disparity map obtained from
the feature-based matching stage. The DP is performed on each
scan line between any two matched feature points on that scan line.
Thus, our algorithm is truly an optimization method. The algorithm
offers a good trade-off in terms of accuracy and computational
efficiency. According to the results of our experiments, the proposed
algorithm increases the accuracy by 20 to 70%, and reduces the
running time of the algorithm by almost 70%.
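The scan-line dynamic programming at the core of such methods can be sketched generically; this is a plain DP with a smoothness penalty over a whole scan line, not the paper's feature-constrained, multiresolution variant, and the cost function and penalty weight are illustrative assumptions:

```python
import numpy as np

def scanline_dp(left, right, max_disp, smooth=2.0):
    """Disparity for one scan line by dynamic programming: minimize
    pixelwise matching cost plus a penalty on disparity changes."""
    n, nd = len(left), max_disp + 1
    cost = np.full((n, nd), np.inf)
    for x in range(n):
        for d in range(min(x, max_disp) + 1):  # right pixel x-d must exist
            cost[x, d] = abs(float(left[x]) - float(right[x - d]))
    dp = cost.copy()
    for x in range(1, n):                      # forward accumulation
        for d in range(nd):
            dp[x, d] = cost[x, d] + min(
                dp[x - 1, dprev] + smooth * abs(d - dprev)
                for dprev in range(nd)
            )
    disp = np.zeros(n, dtype=int)              # backtrack the best path
    disp[-1] = int(np.argmin(dp[-1]))
    for x in range(n - 2, -1, -1):
        disp[x] = int(np.argmin(
            dp[x] + smooth * np.abs(np.arange(nd) - disp[x + 1])))
    return disp

# Toy scan line: the left line is the right line shifted by 2 pixels,
# so the recovered disparity settles at 2 away from the border.
right = np.array([10, 50, 90, 30, 70, 20, 60, 40, 80, 15])
left = np.concatenate([right[:2], right[:-2]])
disp = scanline_dp(left, right, max_disp=4)
```

Restricting this optimization to the intervals between already-matched feature points, as the paper proposes, shrinks the DP's search range and is what yields the reported speed-up.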
Abstract: Intelligent technologies are increasingly facilitating
sustainable water management strategies in Australia. While this
innovation can present clear cost benefits to utilities through
immediate leak detection and the deferral of capital costs, the impact of
this technology on households is less distinct. By offering real-time
engagement and detailed end-use consumption breakdowns, there is
significant potential for demand reduction as a behavioural response
to increased information. Despite this potential, passive
implementation without well-planned residential engagement
strategies is likely to result in a lost opportunity. This paper begins
this research process by exploring the effect of smart water meters
through the lens of three behaviour change theories. The Theory of
Planned Behaviour (TPB), Belief Revision theory (BR) and Practice
Theory emphasise different variables that can potentially influence
and predict household water engagement. By acknowledging the
strengths of each theory, the nuances and complexity of household
water engagement can be recognised, which can contribute to
effective planning for residential smart meter engagement strategies.