Abstract: A model is presented to find the optimal design of the
mixed renewable warranty policy for non-repairable Weibull life
products. The optimal design considers the conflict of interests
between the customer and the manufacturer: the customer interests
are longer full rebate coverage period and longer total warranty
coverage period, the manufacturer interests are lower warranty cost
and lower risk. The design factors are full rebate and total warranty
coverage periods. Results showed that mixed policy is better than full
rebate policy in terms of risk and total warranty coverage period in all
of the three bathtub regions. In addition, results showed that linear
policy is better than mixed policy in infant mortality and constant
failure regions while the mixed policy is better than linear policy in
ageing region of the model. Furthermore, the results showed that
using burn-in period for infant mortality products reduces warranty
cost and risk.
Abstract: We study spatial design of experiments, where the goal is to select a most informative subset of prespecified size from a set of correlated random variables. The problem arises in many applied domains, such as meteorology, environmental statistics, and statistical geology, where observations can be collected at different locations and possibly at different times. In spatial design, when the design region and the set of interest are discrete, the covariance matrix completely describes any objective function, and our goal is to choose a feasible design that minimizes the resulting uncertainty. The problem is recast as that of maximizing the determinant of the covariance matrix of the chosen subset; this problem is NP-hard. When such designs are used in computer experiments, the design space is often very large and it is not possible to compute the exact optimal solution. Heuristic optimization methods can discover efficient experimental designs in situations where traditional designs cannot be applied, exchange methods are ineffective, and an exact solution is not possible. We developed a genetic algorithm (GA) to take advantage of its exploratory power, and demonstrate its successful application in a large design space. We consider a real case of design of experiments in which the design space is very large and solve it with the proposed GA.
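The determinant-maximization step above (often called maximum-entropy sampling) can be sketched as a simple subset-coding genetic algorithm. This is a minimal illustrative sketch, not the paper's implementation; the operators, parameters, and function names below are my own assumptions.

```python
import numpy as np

def subset_logdet(cov, idx):
    """Fitness: log-determinant of the covariance submatrix for subset idx."""
    sign, logdet = np.linalg.slogdet(cov[np.ix_(idx, idx)])
    return logdet if sign > 0 else -np.inf

def ga_max_entropy_design(cov, k, pop_size=40, generations=200, seed=0):
    """GA over k-subsets of {0..n-1} maximizing det of the covariance submatrix."""
    rng = np.random.default_rng(seed)
    n = cov.shape[0]
    pop = [rng.choice(n, size=k, replace=False) for _ in range(pop_size)]
    for _ in range(generations):
        fits = np.array([subset_logdet(cov, ind) for ind in pop])
        order = np.argsort(fits)[::-1]
        elite = [pop[i] for i in order[: pop_size // 2]]      # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.choice(len(elite), size=2, replace=False)
            genes = np.union1d(elite[a], elite[b])            # crossover: merge parents
            child = rng.choice(genes, size=k, replace=False)  # repair to size k
            if rng.random() < 0.3:                            # mutation: swap one site
                out = rng.integers(k)
                pool = np.setdiff1d(np.arange(n), child)
                child[out] = rng.choice(pool)
            children.append(child)
        pop = elite + children
    fits = np.array([subset_logdet(cov, ind) for ind in pop])
    return np.sort(pop[int(np.argmax(fits))])
```

Keeping the elite half each generation makes the best fitness monotone non-decreasing, a common safeguard when the objective (here a log-determinant) is expensive and noisy exploration could otherwise lose good designs.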
Abstract: Cryptographic algorithms play a crucial role in the
information society by providing protection from unauthorized
access to sensitive data. It is clear that information technology will
become increasingly pervasive; hence we can expect the emergence
of ubiquitous or pervasive computing and ambient intelligence. These
new environments and applications will present new security
challenges, and there is no doubt that cryptographic algorithms and
protocols will form a part of the solution. The efficiency of a public
key cryptosystem is mainly measured in computational overheads,
key size and bandwidth. In particular, the RSA algorithm is used in
many applications for providing security. Although the security
of RSA is beyond doubt, the evolution in computing power has
caused a growth in the necessary key length. The fact that most chips
on smart cards cannot process keys exceeding 1024 bits shows that
there is a need for an alternative. NTRU is such an alternative: a
collection of mathematical algorithms based on manipulating lists of
very small integers and polynomials. This allows NTRU to achieve high
speeds with the use of minimal computing power. NTRU (Nth degree
Truncated Polynomial Ring Unit) is the first secure public key
cryptosystem not based on factorization or discrete logarithm
problem. This means that, even given substantial computational resources
and time, an adversary should not be able to break the key.
Multi-party communication and the requirement of optimal resource
utilization have created a present-day demand for
applications that need security enforcement and can be
enhanced with high-end computing. This has prompted us to develop
high-performance NTRU schemes using approaches such as the use
of high-end computing hardware. Peer-to-peer (P2P) or enterprise
grids have proven to be one approach for developing high-end
computing systems. By utilizing them, one can improve the
performance of NTRU through parallel execution. In this paper we
propose and develop an application for NTRU using enterprise grid
middleware called Alchemi. An analysis and comparison of its
performance for various text files is presented.
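NTRU's arithmetic of "lists of very small integers and polynomials" centers on multiplication in the truncated polynomial ring Z_q[x]/(x^N − 1), i.e. a cyclic convolution, and it is this operation that parallel execution can accelerate, since each output coefficient can be computed independently. The following is a minimal sketch of that star product, not the Alchemi-based implementation described here:

```python
def ring_mult(a, b, N, q):
    """Star product in Z_q[x]/(x^N - 1): cyclic convolution of the
    coefficient lists a and b (index i holds the coefficient of x^i)."""
    c = [0] * N
    for i in range(N):
        if a[i] == 0:          # NTRU polynomials are sparse; skip zero terms
            continue
        for j in range(N):
            c[(i + j) % N] = (c[(i + j) % N] + a[i] * b[j]) % q
    return c
```

For example, multiplying x by x^(N-1) wraps around to give 1, since x^N ≡ 1 in this ring. A grid deployment can split the output indices among workers because each c[k] depends only on a and b, never on other entries of c.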
Abstract: The wavelet transform has been extensively used in
machine fault diagnosis and prognosis owing to its ability to deal
with non-stationary signals. Existing wavelet-transform-based
schemes for fault diagnosis employ wavelet decomposition of the
entire vibration frequency band, which not only involves huge
computational overhead in extracting the features but also increases
the dimensionality of the feature vector. This increase in the
dimensionality has the tendency to 'over-fit' the training data and
could mislead the fault diagnostic model. In this paper, a novel
technique, the envelope wavelet packet transform (EWPT), is proposed in
which features are extracted based on wavelet packet transform of the
filtered envelope signal rather than the overall vibration signal. It not
only reduces the computational overhead in terms of reduced number
of wavelet decomposition levels and features but also improves the
fault detection accuracy. Analytical expressions are provided for the
optimal frequency resolution and decomposition level selection in
EWPT. Experimental results with both actual and simulated machine
fault data demonstrate significant gain in fault detection ability by
EWPT at reduced complexity compared to existing techniques.
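The envelope step that distinguishes EWPT from plain wavelet packet analysis can be illustrated with the classical analytic-signal method. The sketch below extracts a signal envelope via an FFT-based Hilbert transform; the subsequent wavelet packet decomposition and the paper's resolution/level selection are omitted, and all names are illustrative.

```python
import numpy as np

def envelope(x):
    """Envelope of a real signal: magnitude of its analytic signal,
    computed with an FFT-based Hilbert transform (zero out negative
    frequencies, double the positive ones)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)
    return np.abs(analytic)
```

For an amplitude-modulated vibration signal, the envelope recovers the slow modulation (e.g. a bearing defect frequency) while discarding the carrier, which is why decomposing the envelope needs far fewer wavelet levels than decomposing the raw signal.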
Abstract: An interactive push VOD system is a new kind of system
that incorporates push technology and interactive techniques. It can
push movies to users at high speeds at off-peak hours for optimal
network usage so as to save bandwidth. This paper presents an effective
software-based solution for processing mass downstream data at
terminals of an interactive push VOD system, where the service can
download a movie according to a viewer's selection. The downstream
data is divided into two categories: (1) carousel data delivered
according to the DSM-CC protocol; (2) IP data delivered according to
Euro-DOCSIS protocol. In order to accelerate download speed and
reduce data loss rate at terminals, this software strategy introduces
caching, multi-thread and resuming mechanisms. The experiments
demonstrate advantages of the software-based solution.
Abstract: In this paper, we first give the representation of the general solution of the following least-squares problem (LSP): given matrices X ∈ R^(n×p), B ∈ R^(p×p) and A0 ∈ R^(r×r), find a matrix A ∈ R^(n×n) such that ‖X^T A X − B‖ = min, s.t. A([1, r]) = A0, where A([1, r]) is the r×r leading principal submatrix of the matrix A. We then consider a best approximation problem: given an n×n matrix Ã with Ã([1, r]) = A0, find Â ∈ SE such that ‖Ã − Â‖ = min_{A∈SE} ‖Ã − A‖, where SE is the solution set of the LSP. We show that the best approximation solution Â is unique and derive an explicit formula for it.
Abstract: The problem of classification of sonar signals has been taken up as a challenging task for neural networks. This paper investigates the design of an optimal classifier using a Multilayer Perceptron Neural Network (MLP NN) and Support Vector Machines (SVM). Results obtained using sonar data sets suggest that the SVM classifier performs well in comparison with the well-known MLP NN classifier. An average classification accuracy of 91.974% is achieved with the SVM classifier and 90.3609% with the MLP NN classifier on the test instances. The area under the Receiver Operating Characteristic (ROC) curve for the proposed SVM classifier on the test data set is found to be 0.981183, which is very close to unity and clearly confirms the excellent quality of the proposed classifier. The SVM classifier employed in this paper is implemented using the kernel Adatron algorithm and is seen to be robust and relatively insensitive to parameter initialization in comparison to the MLP NN.
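The kernel Adatron named above can be sketched as a simple additive update on the dual variables, α_i ← max(0, α_i + η(1 − y_i Σ_j α_j y_j K_ij)). The following toy version (RBF kernel, bias term omitted, parameters arbitrary) is an illustrative sketch rather than the authors' implementation:

```python
import numpy as np

def rbf(X1, X2, gamma=1.0):
    """RBF kernel matrix between two sets of points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_adatron(X, y, gamma=1.0, eta=0.1, epochs=200):
    """Kernel Adatron: sweep the training set, nudging each alpha toward
    unit functional margin and clipping at zero (bias omitted)."""
    K = rbf(X, X, gamma)
    alpha = np.zeros(len(y))
    for _ in range(epochs):
        for i in range(len(y)):
            margin = y[i] * np.sum(alpha * y * K[i])
            alpha[i] = max(0.0, alpha[i] + eta * (1.0 - margin))
    return alpha

def predict(X_train, y, alpha, X_new, gamma=1.0):
    """Classify new points by the sign of the kernel expansion."""
    return np.sign(rbf(X_new, X_train, gamma) @ (alpha * y))
```

The appeal noted in the abstract (robustness to initialization) follows from the update being a fixed-step ascent on a concave dual objective: starting all alphas at zero always converges toward the same large-margin solution for separable data.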
Abstract: We consider optimal channel equalization for MIMO
(multi-input/multi-output) time-varying channels in the sense of
MMSE (minimum mean-squared-error), where the observation noise
can be non-stationary. We show that all ZF (zero-forcing) receivers
can be parameterized in an affine form which eliminates completely
the ISI (inter-symbol-interference), and optimal channel equalizers
can be designed through minimization of the MSE (mean-squared-error)
between the detected signals and the transmitted signals,
among all ZF receivers. We demonstrate that the optimal channel
equalizer is a modified Kalman filter, and show that under the AWGN
(additive white Gaussian noise) assumption, the proposed optimal
channel equalizer minimizes the BER (bit error rate) among all
possible ZF receivers. Our results are applicable to optimal channel
equalization for DWMT (discrete wavelet multitone), multirate transmultiplexers,
OFDM (orthogonal frequency division multiplexing),
and DS (direct sequence) CDMA (code division multiple access)
wireless data communication systems. A design algorithm for optimal
channel equalization is developed, and several simulation examples
are worked out to illustrate the proposed design algorithm.
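For intuition, a time-invariant linear MMSE equalizer, the simplest stand-in for the paper's Kalman-filter formulation, solves x̂ = (HᵀH + σ²I)⁻¹Hᵀy for a known channel convolution matrix H. Below is a minimal sketch (real-valued channel, unit-power i.i.d. symbols assumed; function names are my own):

```python
import numpy as np

def convolution_matrix(h, n):
    """Tall Toeplitz matrix H such that H @ x == np.convolve(h, x)."""
    m = len(h) + n - 1
    H = np.zeros((m, n))
    for j in range(n):
        H[j:j + len(h), j] = h
    return H

def mmse_equalize(y, h, n, noise_var):
    """Linear MMSE estimate of x from y = H x + v for unit-power symbols:
    xhat = (H^T H + noise_var * I)^{-1} H^T y."""
    H = convolution_matrix(h, n)
    A = H.T @ H + noise_var * np.eye(n)
    return np.linalg.solve(A, H.T @ y)
```

As noise_var → 0 this reduces to the zero-forcing (least-squares) solution, which removes ISI exactly but amplifies noise; the σ²I term is precisely the MMSE trade-off discussed above.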
Abstract: An enhanced particle swarm optimization algorithm
(PSO) is presented in this work to solve the non-convex OPF
problem that has both discrete and continuous optimization variables.
The objective functions considered are the conventional quadratic
function and the augmented quadratic function. The latter model
presents non-differentiable and non-convex regions that challenge
most gradient-based optimization algorithms. The variables to be
optimized are the generator real power outputs and
voltage magnitudes, discrete transformer tap settings, and discrete
reactive power injections due to capacitor banks. The set of equality
constraints taken into account are the power flow equations while the
inequality ones are the limits of the real and reactive power of the
generators, voltage magnitude at each bus, transformer tap settings,
and capacitor banks reactive power injections. The proposed
algorithm combines PSO with Newton-Raphson algorithm to
minimize the fuel cost function. The IEEE 30-bus system with six
generating units is used to test the proposed algorithm. Several cases
were investigated to test and validate the consistency of detecting
optimal or near optimal solution for each objective. Results are
compared to solutions obtained using sequential quadratic
programming and Genetic Algorithms.
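The PSO core, independent of the power-flow coupling handled by Newton-Raphson in the paper, can be sketched as a standard global-best swarm with bound clipping. This illustrative version (my own parameter choices, generic objective) shows the velocity/position update:

```python
import numpy as np

def pso_minimize(f, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Global-best PSO: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x),
    with positions clipped to the box [lo, hi]."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    g_f = pbest_f.min()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)              # enforce variable limits
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        if fx.min() < g_f:
            g_f, g = fx.min(), x[np.argmin(fx)].copy()
    return g, g_f
```

In the OPF setting, f would evaluate the fuel cost after a power-flow solve, and the discrete variables (tap settings, capacitor steps) would be rounded to their nearest feasible values before evaluation; the sketch keeps all variables continuous.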
Abstract: The application of a Static Synchronous Series Compensator (SSSC) controller to improve the transient stability performance of a power system is thoroughly investigated in this paper. The design problem of the SSSC controller is formulated as an optimization problem, and the Particle Swarm Optimization (PSO) technique is employed to search for optimal controller parameters. By minimizing a time-domain objective function involving the deviation of the oscillatory rotor angle of the generator, the transient stability performance of the system is improved. The proposed controller is tested on a weakly connected power system subjected to different severe disturbances. Non-linear simulation results are presented to show the effectiveness of the proposed controller and its ability to provide efficient damping of low-frequency oscillations. It is also observed that the proposed SSSC controller greatly improves the voltage profile of the system under severe disturbances.
Abstract: Since actuator capacity is limited, it is
conceivable that actuators saturate in real applications of active
control systems under severe earthquakes; hence actuator saturation
should be considered as a constraint in the design of optimal controllers.
In this paper, the optimal design of active controllers for nonlinear
structures considering actuator saturation is studied. The
proposed method is based on defining an optimization problem in
which the objective is to minimize the maximum displacement of the
structure when an actuator of limited capacity is used. To this end, a single degree of
freedom (SDF) structure with a bilinear hysteretic behavior has been
simulated under a white noise ground acceleration of different
amplitudes. An active tendon control mechanism, composed of prestressed
tendons and an actuator, together with an instantaneous optimal control
algorithm based on the extended nonlinear Newmark method, has been
used. To achieve the best results, the weights corresponding to
displacement, velocity, acceleration and control force in the
performance index have been optimized by the Distributed Genetic
Algorithm (DGA). Results show the effectiveness of the proposed
method in considering actuator saturation. Also, based on the
numerical simulations, it can be concluded that the actuator capacity
and the average value of the required control force are two important
factors in designing nonlinear controllers that consider actuator
saturation.
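The central constraint, clipping the commanded force to the actuator capacity, can be illustrated on a toy linear SDOF model (not the paper's bilinear hysteretic structure or Newmark-based algorithm; the gains and constants below are arbitrary):

```python
import numpy as np

def simulate_sdof(u_max, kd=400.0, kv=40.0, dt=0.005, steps=2000, seed=0):
    """Linear SDOF oscillator (m=1, c=0.1, k=100) under white-noise base
    excitation, controlled by saturated state feedback:
    u = clip(-kd*x - kv*xdot, -u_max, u_max). Returns peak |displacement|.
    Semi-implicit Euler integration keeps the undamped case stable."""
    rng = np.random.default_rng(seed)
    m, c, k = 1.0, 0.1, 100.0
    x, xd, peak = 0.0, 0.0, 0.0
    for _ in range(steps):
        ag = rng.normal(0.0, 1.0)                        # ground acceleration sample
        u = np.clip(-kd * x - kv * xd, -u_max, u_max)    # actuator saturation
        xdd = (-c * xd - k * x + u) / m - ag
        xd += xdd * dt
        x += xd * dt
        peak = max(peak, abs(x))
    return peak
```

Running the same excitation with u_max = 0 (no control) and a finite u_max shows the controlled peak displacement drop sharply, while the clip makes the demanded force never exceed the stated capacity, mirroring the constraint the paper imposes on the optimization.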
Abstract: This paper describes various stages of design and prototyping of a modular robot for use in various industrial applications. The major goal of the current research has been to design and make different robotic joints at low cost, capable of being assembled together in any given order to achieve various robot configurations. Five different types of joints were designed and manufactured, with extensive research carried out on the design of each joint in order to achieve optimal strength, size, modularity, and price. This paper presents the various stages of research and development undertaken to engineer these joints, including material selection, manufacturing, and strength analysis. The outcome of this research marks the birth of a new generation of modular industrial robots with a wider range of applications and greater efficiency.
Abstract: The paper is concerned with developing stochastic delay mechanisms for efficient multicast protocols and for smooth mobile handover processes which are capable of preserving a given Quality of Service (QoS). In both applications the participating entities (receiver nodes or subscribers) sample a stochastic timer and generate load after a random delay. In this way, the load on the networking resources is evenly distributed, which helps to maintain QoS communication. The optimal timer distributions have been sought in different p.d.f. families (e.g. exponential, power law and radial basis function) and the optimal parameters have been found in a recursive manner. Detailed simulations have demonstrated the improvement in performance both in the case of multicast and mobile handover applications.
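The timer mechanism can be illustrated in a few lines: each receiver samples an i.i.d. delay from a chosen p.d.f. (here exponential, one of the families mentioned), which spreads responses across time slots instead of concentrating them at one instant. An illustrative sketch with made-up names:

```python
import random
from collections import Counter

def peak_slot_load(delays, slot=0.1):
    """Maximum number of responses falling into any time slot of width `slot`."""
    return max(Counter(int(d / slot) for d in delays).values())

def exponential_delays(n, mean_delay, seed=0):
    """Each of n receivers samples an i.i.d. exponential timer."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / mean_delay) for _ in range(n)]
```

With no delay mechanism, all n responses land in the same slot (peak load n); with exponential timers, the per-slot peak shrinks roughly in proportion to how widely the p.d.f. spreads its mass, which is the quantity the recursive parameter search above tunes.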
Abstract: Breastfeeding is an important concept in the maternal life of a woman. In this paper, we focus on exclusive breastfeeding. Exclusive breastfeeding is the feeding of a baby on no other milk apart from breast milk. This type of breastfeeding is very important during the first six months because it supports optimal growth and development during infancy and reduces the risk of debilitating diseases and problems. Moreover, in Mauritius, exclusive breastfeeding has decreased the incidence and/or severity of diarrhea, lower respiratory infection and urinary tract infection. In this paper, we give an overview of exclusive breastfeeding in Mauritius and the factors influencing it. We further analyze the local practices of exclusive breastfeeding using the Generalized Poisson regression model and the negative-binomial model, since the data are over-dispersed.
Abstract: The objective of this paper is to propose an adaptive multi-threshold for image segmentation, specifically for object detection. Due to the different types of license plates being used, the requirements for an automatic LPR are rather different for each country. The proposed technique is applied to a Malaysian LPR application. It is based on a Multilayer Perceptron trained by back propagation. The proposed adaptive threshold is introduced to find the optimum threshold values. The technique relies on the peak value from the graph of the number of objects versus a specific range of threshold values. The proposed approach has improved the overall performance compared to current optimal threshold techniques. Further improvement of this method is in progress to accommodate real-time system specifications.
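The peak-seeking rule, choosing the threshold where the object count peaks, can be miniaturized in 1D by counting runs above each candidate threshold (the paper's 2D connected-object counting and MLP stage are omitted; this is only an illustrative sketch):

```python
def count_objects(signal, t):
    """Number of maximal runs of samples strictly above threshold t
    (the 1D analogue of counting connected objects in a binarized image)."""
    runs, inside = 0, False
    for s in signal:
        if s > t and not inside:
            runs += 1
        inside = s > t
    return runs

def peak_threshold(signal, thresholds):
    """Candidate threshold at which the object count peaks."""
    return max(thresholds, key=lambda t: count_objects(signal, t))
```

A threshold that is too high merges nothing but loses faint objects, and one that is too low fuses objects together; the object count therefore peaks near the threshold that separates the most components, which is the value the adaptive scheme selects.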
Abstract: The optimal control problem for the viscoelastic melt
spinning process has not yet been reported in the literature. In this
study, an optimal control problem for a mathematical model of a
viscoelastic melt spinning process is considered. The Maxwell-Oldroyd
model is used to describe the rheology of the polymeric material of
which the fiber is made. The extrusion velocity of the polymer at the spinneret
as well as the velocity and the temperature of the quench air and the
fiber length serve as control variables. A constrained optimization
problem is derived and the first–order optimality system is set up
to obtain the adjoint equations. Numerical solutions are carried out
using a steepest descent algorithm. A computer program in MATLAB
is developed for simulations.
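The steepest descent solver mentioned above (shown here in Python rather than MATLAB) can be sketched generically with an Armijo backtracking line search; this is an illustrative routine, not the paper's code:

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-8, max_iter=500):
    """Steepest descent with Armijo backtracking: halve the step until a
    sufficient-decrease condition holds, then move against the gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        step = 1.0
        while f(x - step * g) > f(x) - 1e-4 * step * (g @ g):  # Armijo condition
            step *= 0.5
        x = x - step * g
    return x
```

In an optimal control setting, f would be the cost functional and grad its gradient with respect to the controls, assembled from the adjoint equations of the first-order optimality system; the line search keeps each control update from overshooting.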
Abstract: Due to the recovering global economy, enterprises are
increasingly focusing on logistics. Investing in logistic measures for
production generates large potential for achieving a good starting
position within a competitive field. Unlike during the global economic
crisis, enterprises are now challenged with investing available capital
to maximize profits. In order to be able to create an informed and
quantifiably comprehensible basis for a decision, enterprises need an
adequate model for logistically and monetarily evaluating measures
in production. The Collaborative Research Centre 489 (SFB 489) at the
Institute for Production Systems (IFA) developed a Logistic
Information System which provides support in making decisions and
is designed specifically for the forging industry. The aim of a
proposed follow-up project is to transfer this approach in order to
develop a universal method for logistically and monetarily evaluating
measures in production.
Abstract: In this paper, a possible optimization of some linear
algebra problems that can be solved by parallel processing using
special arrays called systolic arrays is investigated. Special types of
transformations are used for designing these arrays, and their
characteristics are described. The main focus is on
discussing the advantages of these arrays in the parallel
computation of the matrix product, with a special approach to the
design of a systolic array for matrix multiplication.
Multiplication of large matrices requires a lot of
computational time and its complexity is O(n^3). Many
algorithms (both sequential and parallel) have been developed with
the purpose of minimizing the time of calculation, and systolic
arrays are well suited for this purpose. In this paper we show
that using an appropriate transformation leads to
more efficient arrays for calculations of this type.
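The staggered data flow of a systolic array for matrix multiplication can be simulated directly: processing element (i, j) consumes the pair (A[i,k], B[k,j]) at wavefront step t = i + j + k, as rows of A stream in from the left and columns of B from the top. A minimal Python simulation (illustrative only):

```python
import numpy as np

def systolic_matmul(A, B):
    """Simulate an n x n systolic array: at step t, PE (i, j) multiplies the
    pair (A[i, k], B[k, j]) with k = t - i - j, mimicking the staggered
    arrival of A-rows from the left and B-columns from the top."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for t in range(3 * n - 2):              # total number of wavefront steps
        for i in range(n):
            for j in range(n):
                k = t - i - j
                if 0 <= k < n:              # PE (i, j) is active at this step
                    C[i, j] += A[i, k] * B[k, j]
    return C
```

Each (i, j, k) triple fires exactly once across the 3n − 2 steps, so the array completes the O(n^3) work of the product in O(n) time with n^2 processing elements, which is exactly the speedup the transformations above are designed to expose.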
Abstract: This paper presents a computational methodology
based on matrix operations for a computer based solution to the
problem of performance analysis of software reliability models
(SRMs). A set of seven comparison criteria has been formulated to
rank various non-homogeneous Poisson process software reliability
models proposed during the past 30 years to estimate software
reliability measures such as the number of remaining faults, software
failure rate, and software reliability. Selection of the optimal SRM for
use in a particular case has been an area of interest for researchers in
the field of software reliability. Tools and techniques for software
reliability model selection found in the literature cannot be used with
high level of confidence as they use a limited number of model
selection criteria. A real data set from a middle-sized software project,
taken from published papers, has been used to demonstrate the matrix
method. The result of this study is a ranking of SRMs based on the
permanent value of the criteria matrix formed for each model from
the comparison criteria. The software reliability model with the
highest value of the permanent is ranked number 1, and so on.
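The permanent that drives the ranking can be computed with Ryser's inclusion-exclusion formula, which is far cheaper than the naive n! expansion though still exponential. An illustrative sketch:

```python
from itertools import combinations

def permanent(M):
    """Matrix permanent via Ryser's inclusion-exclusion formula:
    perm(M) = sum over nonempty column subsets S of
              (-1)^(n-|S|) * prod_i (sum_{j in S} M[i][j]).
    Runs in O(2^n * n^2), versus O(n * n!) for the naive expansion."""
    n = len(M)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - r) * prod
    return total
```

For a 2×2 matrix [[1, 2], [3, 4]], the permanent is 1·4 + 2·3 = 10 (like the determinant but with all signs positive), and criteria matrices of the modest size used for SRM comparison are well within reach of this formula.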
Abstract: Most routing protocols (DSR, AODV, etc.) that have
been designed for wireless ad hoc networks incorporate the broadcasting
operation in their route discovery scheme. Probabilistic broadcasting
techniques have been developed to optimize the broadcast operation
which is otherwise very expensive in terms of the redundancy
and the traffic it generates. In this paper we have explored percolation
theory to gain a different perspective on probabilistic broadcasting
schemes, which have been actively researched in recent years.
This theory has helped us estimate the value of the broadcast probability
in a wireless ad hoc network as a function of the size of the network.
We also show that, when operating at these optimal values of broadcast
probability, there is at least a 25-30% reduction in packet regeneration
during successful broadcasting.
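The probabilistic broadcast mechanism itself is easy to sketch: each node that first receives the packet rebroadcasts with probability p, and on any fixed topology the transmission count under p < 1 never exceeds that of blind flooding (p = 1). A minimal simulation with illustrative names:

```python
import random

def gossip_broadcast(adj, source, p, rng):
    """Probabilistic flooding on a graph given as an adjacency dict:
    each newly reached node rebroadcasts with probability p (the source
    always transmits). Returns (reached nodes, number of transmissions)."""
    reached = {source}
    frontier = [source]
    transmissions = 0
    while frontier:
        nxt = []
        for u in frontier:
            transmissions += 1              # node u broadcasts once
            for v in adj[u]:
                if v not in reached:
                    reached.add(v)
                    if rng.random() < p:    # relay decision
                        nxt.append(v)
        frontier = nxt
    return reached, transmissions
```

Percolation theory enters when choosing p: below the percolation threshold of the network graph the broadcast dies out before covering it, and just above the threshold coverage is near-complete while a substantial fraction of retransmissions (the 25-30% figure above) is saved relative to flooding.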