Abstract: With implied volatility an important factor in financial decision-making, particularly in option pricing and valuation, and given that the pricing biases of Leland option pricing models are related to the implied volatility structure of options, this study examines implied adjusted volatility smile patterns and term structures in S&P/ASX 200 index options using different Leland option pricing models. The examination of implied adjusted volatility smiles and term structures in the Australian index options market covers the global financial crisis that began in mid-2007. The implied adjusted volatility was found to escalate at approximately three times its pre-crisis rate.
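For concreteness, a minimal Python sketch of the standard Leland (1985) adjustment is given below: the Black-Scholes formula is evaluated at a transaction-cost-adjusted volatility, and the implied adjusted volatility is backed out by inverting the price. The round-trip cost rate k and rebalancing interval dt here are illustrative inputs, not values from the study.

```python
from math import erf, exp, log, pi, sqrt

def leland_adjusted_vol(sigma, k, dt):
    # Leland (1985): sigma_hat^2 = sigma^2 * (1 + sqrt(2/pi) * k / (sigma * sqrt(dt)))
    return sigma * sqrt(1.0 + sqrt(2.0 / pi) * k / (sigma * sqrt(dt)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes call price; Leland's model evaluates it at the adjusted volatility
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return S * N(d1) - K * exp(-r * T) * N(d2)

def implied_adjusted_vol(price, S, K, T, r, lo=1e-4, hi=3.0, tol=1e-8):
    # Bisection: back out the volatility that reproduces an observed option price
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if bs_call(S, K, T, r, mid) < price else (lo, mid)
    return 0.5 * (lo + hi)

sigma_hat = leland_adjusted_vol(0.20, 0.01, 1 / 52)   # weekly rebalancing, 1% cost
```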
Abstract: Selection of maize (Zea mays) hybrids with wide adaptability across diverse farming environments is important before recommending them, in order to achieve a high rate of hybrid adoption. Grain yield of 14 maize hybrids, tested in a randomized complete-block design with four replicates across 22 environments in Iran, was analyzed using the site regression (SREG) stability model. The biplot technique facilitates a visual evaluation of superior genotypes, which is useful for cultivar recommendation and mega-environment identification. The objectives of this study were (i) to identify hybrids with both high mean performance and high stability, and (ii) to determine mega-environments for maize production in Iran. Biplot analysis identified two mega-environments. The first mega-environment included KRM, KSH, MGN, DZF A, KRJ, DRB, DZF B, SHZ B, and KHM, where hybrid G10 performed best. The second mega-environment included ESF B, ESF A, and SHZ A, where hybrid G4 was the best. According to the ideal-hybrid biplot, hybrid G10 was better than all other hybrids, followed by G1 and G3. These hybrids were identified as the best, combining high grain yield with high yield stability. GGE biplot analysis provided a framework for identifying target testing locations that discriminate among genotypes and for selecting genotypes that are both high yielding and stable.
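As an illustration of the SREG/GGE computation (not the study's actual data), the biplot coordinates come from a rank-two singular value decomposition of the environment-centered yield matrix. The sketch below uses random numbers as a stand-in for the 14-hybrid by 22-environment table and a symmetric singular-value scaling, which is one common choice.

```python
import numpy as np

# Stand-in for the 14-hybrid x 22-environment grain-yield table
rng = np.random.default_rng(1)
Y = rng.normal(8.0, 1.0, size=(14, 22))

G = Y - Y.mean(axis=0)                    # remove environment main effects (SREG)
U, s, Vt = np.linalg.svd(G, full_matrices=False)
gen_xy = U[:, :2] * np.sqrt(s[:2])        # genotype biplot coordinates
env_xy = Vt[:2, :].T * np.sqrt(s[:2])     # environment biplot coordinates
# Plotting gen_xy and env_xy together gives the GGE biplot used for
# mega-environment identification (the "which-won-where" view).
```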
Abstract: PCCI (premixed charge compression ignition) engines can reduce NOx and PM emissions simultaneously without sacrificing thermal efficiency, but the low combustion temperature resulting from early fuel injection, and ignition occurring prior to TDC, can cause higher THC and CO emissions and fuel consumption. In conclusion, it was found that PCCI combustion achieved by the two-stage injection strategy with optimized calibration factors (e.g., EGR rate, injection pressure, swirl ratio, intake pressure, injection timing) can reduce NOx and PM emissions simultaneously. This research is expected to provide valuable information conducive to the development of an innovative combustion engine that can fulfill upcoming stringent emission standards.
Abstract: Traditional Failure Mode and Effects Analysis (FMEA) uses the Risk Priority Number (RPN) to evaluate the risk level of a component or process. The RPN index is determined by calculating the product of the severity, occurrence, and detection indexes. The most critically debated disadvantage of this approach is that different combinations of these three indexes may produce an identical RPN value. This paper addresses the drawbacks of traditional FMEA and proposes a new approach to overcome these shortcomings. The Risk Priority Code (RPC) is used to prioritize failure modes when two or more failure modes have the same RPN. A new method is also proposed to prioritize failure modes when there is disagreement in the ranking scales for severity, occurrence, and detection. Analysis of Variance (ANOVA) is used to compare means of RPN values, and the SPSS (Statistical Package for the Social Sciences) statistical analysis package is used to analyze the data. The results presented are based on two case studies. It is found that the proposed methodology resolves the limitations of the traditional FMEA approach.
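The tie problem is easy to reproduce. In the hypothetical sketch below, three different (severity, occurrence, detection) triples all yield RPN = 56; breaking the tie by severity is shown only as an illustration, since the paper's RPC scheme is its own construction.

```python
# Hypothetical failure modes: (name, severity, occurrence, detection) on 1-10 scales
modes = [("seal leak", 7, 4, 2), ("motor stall", 4, 7, 2), ("relay chatter", 2, 4, 7)]

# Sort by RPN = S * O * D, then by severity as one possible tie-breaker
ranked = sorted(modes, key=lambda m: (m[1] * m[2] * m[3], m[1]), reverse=True)
for name, s, o, d in ranked:
    print(f"{name:13s} S={s} O={o} D={d} RPN={s * o * d}")
# All three modes share RPN = 56, illustrating why RPN alone cannot rank them.
```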
Abstract: According to the statistics, the prevalence of congenital hearing loss in Taiwan is approximately six per thousand; furthermore, one in a thousand infants has severe hearing impairment. Hearing ability during infancy has a significant impact on the later development of children's oral expression, language maturity, cognitive performance, educational ability, and social behavior. Although most children born with hearing impairment have sensorineural hearing loss, almost every child retains at least some residual hearing. If provided with a hearing aid or cochlear implant (a bionic ear) in time, together with hearing and speech training, even severely hearing-impaired children can still learn to talk. On the other hand, those who are not diagnosed, and thus unable to begin hearing and speech rehabilitation in a timely manner, may lose an important opportunity to live a complete and healthy life. Eventually, the lack of hearing and speaking ability will affect the development of mental and physical functions, intelligence, and social adaptability. This problem will not only result in irreparable harm to the hearing-impaired child for a lifetime but also create a heavy burden for the family and society. It is therefore necessary to establish a computer-assisted predictive model that can accurately detect and help diagnose newborn hearing loss, so that early interventions can be provided in time and waste of medical resources avoided. This study uses records from the neonatal database of the case hospital and adopts two analysis approaches: making model predictions with a support vector machine (SVM) alone, and using logistic regression for factor screening prior to SVM prediction, then comparing the results. The results indicate that prediction accuracy is as high as 96.43% when the factors are first screened through logistic regression. Hence, the model constructed in this study can offer real help to physicians in clinical diagnosis and genuinely benefit early intervention for newborn hearing impairment.
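A minimal scikit-learn sketch of the two-stage design (logistic-regression factor screening feeding an SVM) is shown below. The synthetic data, penalty strength, and kernel are placeholders, since the neonatal records and the study's actual settings are not public.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the (confidential) neonatal records
X, y = make_classification(n_samples=500, n_features=20, n_informative=6, random_state=0)

# Screen factors with an L1-penalized logistic regression, then classify with an SVM
pipe = make_pipeline(
    StandardScaler(),
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
    SVC(kernel="rbf"),
)
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```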
Abstract: To investigate the possible correlation between peer aggression and peer victimization, 148 sixth-graders were asked to respond to the Reduced Aggression and Victimization Scales (RAVS). The RAVS measures the frequency of self-reported aggressive behaviors or of being victimized during the week prior to the survey. The scales are composed of six items each, and each point represents one instance of aggression or victimization. The Pearson product-moment correlation coefficient (PMCC) was used to determine the correlations between the sixth-graders' scores on the two scales, both for individual items and for total scores. Positive correlations were established, significant at the 0.01 level.
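The computation reduces to a single call; the scores below are invented for illustration, not drawn from the 148 respondents.

```python
from scipy.stats import pearsonr

aggression = [3, 0, 5, 2, 1, 4, 6, 0]      # hypothetical weekly RAVS aggression scores
victimization = [2, 1, 6, 2, 0, 5, 5, 1]   # hypothetical victimization scores

r, p = pearsonr(aggression, victimization)
print(f"r = {r:.2f}, p = {p:.4f}")          # significant at the 0.01 level if p < 0.01
```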
Abstract: This paper develops an algorithm for a finite-capacity material requirement planning (FCMRP) system for a multi-stage assembly flow shop. The developed FCMRP system has two main stages. The first stage allocates operations to the first- and second-priority work centers and determines the sequence of the operations on each work center. The second stage determines the optimal start time of each operation using a linear programming model. Real data from a factory are used to analyze and evaluate the effectiveness of the proposed FCMRP system and to ensure a practical solution for the user. Five performance measures are considered: total tardiness, number of tardy orders, total earliness, number of early orders, and average flow time. The proposed FCMRP system offers an adjustable solution that compromises among the conflicting performance measures; the user can adjust the weight of each measure to obtain the desired performance. The results show that the combination of FCMRP NP3 and EDD outperforms the other combinations in terms of the overall performance index. The calculation time of the proposed FCMRP system is about 10 minutes, which is practical for the planners at the factory.
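The second-stage timing model can be posed as a small linear program. The sketch below minimizes a weighted sum of tardiness and earliness for three serial operations with invented processing times and due dates; the paper's full model, work-center allocation, and weights are richer than this.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: three operations in series on one work center
p = np.array([4.0, 3.0, 5.0])    # processing times
d = np.array([6.0, 10.0, 14.0])  # due dates
wT, wE = 10.0, 1.0               # planner-adjustable weights (tardiness vs earliness)
n = len(p)

# Decision vector x = [s_1..s_n, t_1..t_n, e_1..e_n] (starts, tardiness, earliness)
c = np.concatenate([np.zeros(n), wT * np.ones(n), wE * np.ones(n)])

A, b = [], []
for i in range(n):
    # tardiness:  s_i + p_i - d_i <= t_i
    row = np.zeros(3 * n); row[i] = 1; row[n + i] = -1
    A.append(row); b.append(d[i] - p[i])
    # earliness:  d_i - (s_i + p_i) <= e_i
    row = np.zeros(3 * n); row[i] = -1; row[2 * n + i] = -1
    A.append(row); b.append(p[i] - d[i])
for i in range(n - 1):
    # precedence: s_i + p_i <= s_{i+1}
    row = np.zeros(3 * n); row[i] = 1; row[i + 1] = -1
    A.append(row); b.append(-p[i])

res = linprog(c, A_ub=np.vstack(A), b_ub=np.array(b), bounds=[(0, None)] * (3 * n))
print("optimal start times:", res.x[:n])
```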
Abstract: The amplitude response of infrared (IR) sensors depends on the reflectance properties of the target. Therefore, to use an IR sensor for measuring distances accurately, prior knowledge of the surface is required. This paper describes the use of the Phong illumination model for determining the properties of a surface and subsequently calculating the distance to it. The angular position of the IR sensor is set normal to the surface to simplify the calculation. An ultrasonic (US) sensor can provide the initial distance information needed to obtain the parameters for this method. In addition, experimental results obtained using LabVIEW are discussed. Care must be taken when positioning objects relative to the sensors during data acquisition, since a small change in angle can yield a distance very different from the actual one. Because stereo camera vision systems do not perform well under some environmental conditions, such as plain walls, glass surfaces, or poor lighting, IR and US sensors can be used in addition to improve the overall vision systems of mobile robots.
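As a much-simplified sketch of the idea (assuming normal incidence, so only the diffuse Phong term matters, and an inverse-square amplitude law), the surface-dependent gain K can be calibrated from one ultrasonic range reading and then inverted for distance. Function names and numbers are illustrative, not the paper's.

```python
from math import sqrt

def calibrate_gain(amplitude_ref, distance_ref):
    # Fold surface reflectance (diffuse Phong term) into K, using A = K / d^2
    return amplitude_ref * distance_ref ** 2

def ir_distance(amplitude, K):
    # Invert the inverse-square amplitude model to recover distance
    return sqrt(K / amplitude)

# Example: the US sensor reports 0.50 m while the IR amplitude reads 2.0 V
K = calibrate_gain(2.0, 0.50)
print(ir_distance(0.5, K))   # 1.0 m when the amplitude drops to a quarter
```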
Abstract: Wrist pulse analysis for the identification of health status is found in ancient Indian as well as Chinese literature. Preprocessing of the wrist pulse is necessary to remove outlier pulses and fluctuations prior to analysis of the pulse pressure signal. This paper discusses the identification of irregular pulses present in the pulse series and the intricacies associated with the extraction of time-domain pulse features. Dynamic Time Warping (DTW) is utilized to identify outlier pulses in the wrist pulse series. The ambiguity present in the identification of pulse features is resolved with the help of the first derivative of the ensemble average of the wrist pulse series. An algorithm for detecting the tidal and dicrotic notches in individual wrist pulse segments is proposed.
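A compact version of the outlier screen is sketched below: each pulse is compared to the ensemble average by dynamic time warping, and pulses whose distance is far above the mean are flagged. Equal-length pulse segments and the two-sigma threshold are assumptions for illustration.

```python
import numpy as np

def dtw_distance(a, b):
    # Classic O(len(a) * len(b)) dynamic time warping distance
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def flag_outliers(pulses, k=2.0):
    # pulses: 2-D array of equal-length pulse segments (one pulse per row)
    template = pulses.mean(axis=0)                 # ensemble average pulse
    d = np.array([dtw_distance(p, template) for p in pulses])
    return d > d.mean() + k * d.std()              # True marks an outlier pulse
```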
Abstract: Construction site safety in China has aroused widespread concern around the world, and it is imperative to investigate the main causes of poor construction site safety. This paper divides the causes into four aspects, namely factors related to workers, objects, the environment, and management, and establishes an accident-cause element system based on the Delphi method. This is followed by the application of structural equation modeling to examine the importance of each aspect of the causes from the standpoints of the different roles involved in construction. The results indicate that all four aspects are in need of improvement and that different roles have different views on the priority of those factors. The paper offers instructive guidance for practitioners taking measures to improve construction site safety in China.
Abstract: In recent years, Malaysia has included renewable energy as an alternative fuel to help diversify the country's energy reliance on oil, natural gas, coal, and hydropower, with biomass and solar energy gaining priority. The scope of this paper is the design procedure and analysis of a solar thermal parabolic trough concentrator by simulation, using meteorological data from several parts of Malaysia. Parameters including the aperture area, the diameter of the receiver, and the working fluid may be varied to optimize the design. The aperture area is determined from the width and length of the concentrator, whereas the geometric concentration ratio (CR) is obtained from the concentrator width and the receiver diameter. Three types of working fluid are investigated. Theoretically, concentration ratios can be very high, in the range of 10 to 40,000, depending on the optical elements used and on continuous tracking of the sun. However, a thorough analysis is essential, as discussed in this paper: optical precision and thermal analysis must be carried out to evaluate the performance of the parabolic trough concentrator, since the theoretical CR is not the only factor that should be considered.
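Under the usual geometric definition (projected aperture area over receiver tube surface area per unit trough length), the concentration ratio reduces to CR = W/(πD). Some authors subtract the receiver shadow from the aperture, so treat the sketch below as one common convention rather than the paper's exact formula.

```python
from math import pi

def geometric_cr(aperture_width_m, receiver_diameter_m):
    # CR = (W * L) / (pi * D * L) = W / (pi * D) for a cylindrical receiver
    return aperture_width_m / (pi * receiver_diameter_m)

print(geometric_cr(2.5, 0.025))   # ~31.8 for a 2.5 m aperture and a 25 mm tube
```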
Abstract: Mixture formation prior to the ignition process plays a key role in diesel combustion. Parametric studies of mixture formation and the ignition process under various injection parameters have received considerable attention for their potential to reduce emissions. The purpose of this study is to clarify the effects of injection pressure on mixture formation and ignition, especially during the ignition delay period, which significantly influences the subsequent combustion process and exhaust emissions. The effects of injection pressure on diesel combustion were investigated fundamentally using a rapid compression machine. The detailed behavior of mixture formation during the ignition delay period was studied using a schlieren photography system with a high-speed camera; this method clearly captures spray evaporation, spray interference, mixture formation, and flame development in real images. The ignition process and flame development were investigated by direct photography using a light-sensitive high-speed color digital video camera. The injection pressure and air motion are important variables that strongly affect fuel evaporation and the endothermic and pyrolysis processes during the ignition delay. An increased injection pressure lengthens spray tip penetration and promotes a greater amount of fuel-air mixing during the ignition delay. The greater quantity of fuel prepared during the ignition delay period thus promotes more rapid heat release.
Abstract: The call for sustainable development challenges both managers and consumers to rethink habitual practices and activities. While consumers are challenged to develop sustainable consumption patterns, companies are asked to establish managerial systems and structures that consider economic, ecological, and social issues. As this is particularly true for housing associations, this paper aims, first, to provide an understanding of sustainability strategy in residential trade and industry (RTI) by identifying relevant facets of this construct and, second, to conceptually analyze the impact of sustainability strategy in RTI on the operational efficiency and performance of municipal housing companies. The author develops a model of sustainability strategy in RTI and its effects and, further, sheds light on priorities for future research.
Abstract: Considering payload, reliability, security, and operational lifetime as major constraints in the transmission of images, we put forward in this paper a steganographic technique implemented at the physical layer. We suggest transmitting halftoned images (the payload constraint) in wireless sensor networks to reduce the amount of transmitted data. For low-power, interference-limited applications, turbo codes provide suitable reliability. Ensuring security is one of the highest priorities in many sensor networks, and the turbo code structure, apart from providing forward error correction, can be utilized to provide encryption. We first consider the halftoned image; the method of embedding a block of data (called the secret) in this halftoned image during the turbo encoding process is then presented, followed by the small modifications required at the turbo decoder to extract the embedded data. The implementation complexity and the degradation of the bit error rate (BER) in the turbo-based stego system are analyzed. Using entropy-based cryptanalytic techniques, we show that the strength of our turbo-based stego system approaches that of one-time pads (OTPs).
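The turbo embedding and decoding steps are specific to the paper, but the payload-reduction step is standard. The sketch below shows Floyd-Steinberg error-diffusion halftoning, which converts a grayscale image to one bit per pixel before encoding.

```python
import numpy as np

def floyd_steinberg(gray):
    # gray: 2-D float array in [0, 1]; returns a 0/1 halftone (1 bit per pixel)
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            new = 1 if img[y, x] >= 0.5 else 0
            err = img[y, x] - new
            out[y, x] = new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16   # diffuse quantization error
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```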
Abstract: The shortest path routing problem is a multiobjective nonlinear optimization problem with constraints. This problem has been addressed by considering the quality-of-service parameters of delay and cost as objectives, either separately or as a weighted sum. Multiobjective evolutionary algorithms can find multiple Pareto-optimal solutions in a single run, and this ability makes them attractive for solving problems with multiple, conflicting objectives. This paper uses an elitist multiobjective evolutionary algorithm based on the Non-dominated Sorting Genetic Algorithm (NSGA) to solve the dynamic shortest path routing problem in computer networks. A priority-based encoding scheme is proposed for population initialization, and elitism ensures that the best solutions do not deteriorate across generations. Results for a sample test network are presented to demonstrate the capability of the proposed approach to generate well-distributed Pareto-optimal solutions to the dynamic routing problem in a single run. The results obtained by NSGA are compared with those of the single-objective weighting-factor method, solved with a Genetic Algorithm (GA).
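The core multiobjective notion is Pareto dominance over (delay, cost) pairs. The sketch below extracts the non-dominated set from a list of candidate routes, which is the building block that NSGA's non-dominated sorting repeats layer by layer; the candidate values are invented.

```python
def pareto_front(points):
    # points: list of (delay, cost) tuples, both to be minimized
    front = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front

print(pareto_front([(10, 5), (8, 7), (12, 4), (9, 9), (8, 6)]))
# -> [(10, 5), (12, 4), (8, 6)]; (8, 7) is dominated by (8, 6), (9, 9) by (8, 7)
```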
Abstract: The design of a pattern classifier includes an attempt to select, among a set of possible features, a minimum subset of weakly correlated features that better discriminate the pattern classes. This is usually a difficult task in practice, normally requiring the application of heuristic knowledge about the specific problem domain. The selection and quality of the features representing each pattern have a considerable bearing on the success of subsequent pattern classification. Feature extraction is the process of deriving new features from the original features in order to reduce the cost of feature measurement, increase classifier efficiency, and allow higher classification accuracy. Many current feature extraction techniques involve linear transformations of the original pattern vectors to new vectors of lower dimensionality. While this is useful for data visualization and for increasing classification efficiency, it does not necessarily reduce the number of features that must be measured, since each new feature may be a linear combination of all of the features in the original pattern vector. In this paper, a new approach to feature extraction is presented in which feature selection, feature extraction, and classifier training are performed simultaneously using a genetic algorithm. In this approach, each feature value is first normalized by a linear equation and then scaled by the associated weight prior to training, testing, and classification. A k-nearest-neighbor (knn) classifier is used to evaluate each set of feature weights. The genetic algorithm optimizes a vector of feature weights, which are used to scale the individual features in the original pattern vectors in either a linear or a nonlinear fashion. By this approach, the number of features used in classification can be effectively reduced.
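A minimal sketch of the simultaneous weight-optimization loop appears below. The population size, genetic operators, and synthetic data are stand-ins, and the paper's GA details (encoding and the nonlinear scaling option) are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, n_informative=4, random_state=0)
X = (X - X.mean(axis=0)) / X.std(axis=0)        # normalize features first

def fitness(w):
    # knn accuracy on weight-scaled features evaluates each candidate weight vector
    return cross_val_score(KNeighborsClassifier(5), X * w, y, cv=3).mean()

pop = rng.random((20, X.shape[1]))              # initial population of weight vectors
for _ in range(15):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]     # keep the best half (truncation)
    children = []
    while len(children) < 10:
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, X.shape[1])       # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]]) + rng.normal(0, 0.1, X.shape[1])
        children.append(np.clip(child, 0, 1))   # Gaussian mutation, clipped to [0, 1]
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print("best weights:", np.round(best, 2))       # near-zero weights mark droppable features
```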
Abstract: In this paper, the problem of estimating the time delay between two spatially separated noisy sinusoidal signals by system identification modeling is addressed. The system is assumed to be perturbed by both input and output additive white Gaussian noise. The presence of input noise introduces bias into the time delay estimates, and the usual remedy requires a priori knowledge of the input-output noise variance ratio. We instead cascade a self-tuned filter with the time delay estimator, making the delay estimates robust to input noise. Simulation results are presented to confirm the superiority of the proposed approach at low input signal-to-noise ratios.
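As a baseline for comparison (not the paper's self-tuned-filter estimator), the delay between two noisy sinusoids can be estimated from the peak of their cross-correlation. For narrowband signals this estimate is ambiguous modulo the period, so the sketch assumes the true delay is within half a period; all signal parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, true_delay = 1000.0, 50.0, 0.004     # sample rate (Hz), tone (Hz), delay (s)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * f0 * (t - true_delay)) + 0.5 * rng.standard_normal(t.size)

xc = np.correlate(y - y.mean(), x - x.mean(), mode="full")
lag = np.argmax(xc) - (len(x) - 1)           # peak location gives the delay in samples
print("estimated delay:", lag / fs)          # ~0.004 s
```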
Abstract: The recovery of metal values from spent catalyst and its safe disposal are gaining interest due to both its hazardous nature and the increased regulation of disposal methods. Prior to recovery of the valuable metals, the removal of entrained deposits, which limit the diffusion of the lixiviant and result in low metal recovery, must be taken into consideration. Therefore, petroleum refinery spent catalyst was subjected to acetone washing and to roasting at 500 °C. The treated samples were investigated for metal bioleaching using Acidithiobacillus ferrooxidans in batch reactors, and the leaching efficiencies were compared. It was found that the acetone-washed spent catalyst gave better metal recovery than the roasted spent catalyst: about 83% of the Ni, 20% of the Al, 50% of the Mo, and 73% of the V were leached from the acetone-washed material. In both cases, recovery of Ni, V, and Mo was high compared with that of Al.
Abstract: IEEE 802.11e is the enhanced version of the IEEE 802.11 MAC dedicated to providing Quality of Service (QoS) in wireless networks. It supports QoS through service differentiation and prioritization mechanisms, with data traffic receiving different priorities based on QoS requirements. Fundamentally, applications are divided into four Access Categories (ACs); each AC has its own buffer queue and behaves as an independent backoff entity, and every frame with a specific traffic priority is assigned to one of these access categories. IEEE 802.11e EDCA (Enhanced Distributed Channel Access) is designed to enhance the IEEE 802.11 DCF (Distributed Coordination Function) mechanisms by providing a distributed access method that supports service differentiation among different classes of traffic. The performance of the IEEE 802.11e MAC layer with different ACs is evaluated to understand the actual benefits of the MAC enhancements.
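For reference, the commonly cited default EDCA parameter set for an 802.11a/g PHY is sketched below (quoted from memory, so the values should be checked against the standard), together with the per-retry contention-window doubling that governs each AC's backoff.

```python
import random

# Default EDCA parameters per Access Category (802.11a/g PHY: aCWmin=15, aCWmax=1023);
# quoted from memory; verify against IEEE 802.11e before relying on them
EDCA = {
    "AC_VO": {"AIFSN": 2, "CWmin": 3,  "CWmax": 7},     # voice
    "AC_VI": {"AIFSN": 2, "CWmin": 7,  "CWmax": 15},    # video
    "AC_BE": {"AIFSN": 3, "CWmin": 15, "CWmax": 1023},  # best effort
    "AC_BK": {"AIFSN": 7, "CWmin": 15, "CWmax": 1023},  # background
}

def backoff_slots(ac, retries=0):
    # Contention window doubles on each retry, capped at CWmax
    p = EDCA[ac]
    cw = min((p["CWmin"] + 1) * 2 ** retries - 1, p["CWmax"])
    return random.randint(0, cw)
```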
Abstract: The objective of this study is to introduce estimators of the parameters and survival function of the Weibull distribution using three different methods: maximum likelihood estimation, standard Bayes estimation, and modified Bayes estimation. The three methods are then compared in a simulation study to find the best one based on MPE and MSE.
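Of the three methods, only the maximum likelihood step maps directly onto a standard library call. A minimal sketch using scipy (with simulated data and the location fixed at zero for the two-parameter model) is shown below; the standard and modified Bayes estimators are specific to the paper and not reproduced here.

```python
import numpy as np
from scipy.stats import weibull_min

data = weibull_min.rvs(c=1.8, scale=5.0, size=200, random_state=0)  # simulated sample

# Maximum likelihood estimates of the two-parameter Weibull (location fixed at 0)
shape, loc, scale = weibull_min.fit(data, floc=0)
print(f"MLE: shape={shape:.3f}, scale={scale:.3f}")

# Estimated survival function S(t) = exp(-(t / scale) ** shape)
t = np.array([2.0, 5.0, 10.0])
print(weibull_min.sf(t, shape, loc=0, scale=scale))
```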