Abstract: Many systems in the natural world exhibit chaotic or non-linear behavior whose complexity is so great that they appear to be random. Identification of chaos in experimental data is essential for characterizing the system and for analyzing the predictability of the data under analysis. The Lyapunov exponents provide a quantitative measure of the sensitivity to initial conditions and are the most useful dynamical diagnostic for chaotic systems. However, it is difficult to accurately estimate the Lyapunov exponents of chaotic signals that are corrupted by random noise. In this work, a method for estimating Lyapunov exponents from noisy time series using the unscented transformation is proposed. The proposed methodology was validated using time series obtained from known chaotic maps. The objective of the work, the proposed methodology and the validation results are discussed in detail.
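The abstract does not detail the unscented-transformation estimator itself; as a noise-free point of reference, the largest Lyapunov exponent of one of the usual validation cases (the logistic map) can be estimated directly as the orbit average of ln|f'(x)|. A minimal sketch, with all parameter values chosen for illustration:

```python
import math

def logistic_lyapunov(r=4.0, x0=0.1, n=100_000, burn=1_000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x_{k+1} = r x_k (1 - x_k) as the orbit average of ln|f'(x)|."""
    x = x0
    for _ in range(burn):                  # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        d = abs(r * (1.0 - 2.0 * x))       # |f'(x)| = |r (1 - 2x)|
        total += math.log(max(d, 1e-300))  # guard against log(0)
        x = r * x * (1.0 - x)
    return total / n

# For r = 4 the exact exponent is ln 2 ~ 0.6931
lam = logistic_lyapunov()
```

A positive estimate signals chaos; with noisy data this direct average breaks down, which is what motivates filtering-based estimators such as the unscented approach described above.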
Abstract: The detection of outliers is essential because they are responsible for serious interpretative problems in both linear and nonlinear regression analysis. Much work has been done on the identification of outliers in linear regression, but little in nonlinear regression. In this article we propose several outlier detection techniques for nonlinear regression. The main idea is to use the linear approximation of a nonlinear model and treat the gradient as the design matrix. The detection techniques are then formulated. Six detection measures are developed and combined with three estimation techniques: the Least-Squares, M- and MM-estimators. The study shows that, among the six measures, only the studentized residual and Cook's Distance, when combined with the MM-estimator, are consistently capable of identifying the correct outliers.
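The abstract gives no formulas; the linear-approximation idea can be sketched as follows, with a hypothetical exponential model and an ordinary least-squares fit (the M- and MM-estimators are omitted). The gradient of the model with respect to the parameters plays the role of the design matrix, from which the hat matrix, studentized residuals and Cook's distances follow; `numpy` and `scipy` are assumed available:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(b * x)

rng = np.random.default_rng(0)
x = np.linspace(0, 2, 30)
y = model(x, 2.0, 1.0) + rng.normal(0, 0.2, x.size)
y[5] += 5.0                                # plant one outlier at index 5

theta, _ = curve_fit(model, x, y, p0=[1.0, 0.5])
a, b = theta
# Gradient of the model w.r.t. (a, b) acts as the design matrix
J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
e = y - model(x, a, b)
H = J @ np.linalg.inv(J.T @ J) @ J.T       # hat matrix of the linearized model
h = np.diag(H)
p = J.shape[1]
s2 = e @ e / (x.size - p)                  # residual variance estimate
t = e / np.sqrt(s2 * (1 - h))              # (internally) studentized residuals
cook = t**2 * h / (p * (1 - h))            # Cook's distance
```

Both diagnostics should flag the planted point; with least squares, masking can occur for multiple outliers, which is why the study above favors the MM-estimator.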
Abstract: The evaporator is an important and widely used heat exchanger in the air-conditioning and refrigeration industries. Different methods have been used by investigators to increase the heat transfer rates in evaporators. One of the passive techniques to enhance the heat transfer coefficient is the application of microfin tubes. The mechanism of heat transfer augmentation in microfin tubes depends on the regime of the two-phase flow. Many investigations of the flow patterns for in-tube evaporation have therefore been reported in the literature. The gravitational force, the surface tension and the vapor-liquid interfacial shear stress are known as the three dominant factors controlling the vapor and liquid distribution inside the tube. A review of the existing literature reveals that previous investigations were concerned with the two-phase flow pattern for flow boiling in horizontal tubes [12], [9]. The objective of the present investigation is therefore to obtain information about the two-phase flow patterns for evaporation of R-134a inside horizontal smooth and microfin tubes. Heat transfer during flow boiling of R-134a inside horizontal microfin and smooth tubes has also been investigated experimentally. The heat transfer coefficients for annular flow in the smooth tube are shown to agree well with Gungor and Winterton's correlation [4]. All the flow patterns that occurred in the tests can be divided into three dominant regimes, i.e., stratified-wavy flow, wavy-annular flow and annular flow. Experimental data are plotted in two kinds of flow maps, i.e., a vapor Weber number versus liquid Weber number flow map and a mass flux versus vapor quality flow map. The transition from wavy-annular flow to annular or stratified-wavy flow is identified in the flow maps.
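The paper's exact Weber number definitions are not given in the abstract; a common choice for such flow maps is the superficial Weber number of each phase, We_v = (Gx)^2 D / (rho_v sigma) and We_l = (G(1-x))^2 D / (rho_l sigma). A small sketch with illustrative (not the paper's) R-134a property values:

```python
def weber_numbers(G, x, D, rho_v, rho_l, sigma):
    """Superficial Weber numbers of the vapor and liquid phases.
    G: mass flux [kg/m^2 s], x: vapor quality [-], D: tube diameter [m],
    rho_v/rho_l: phase densities [kg/m^3], sigma: surface tension [N/m]."""
    we_v = (G * x) ** 2 * D / (rho_v * sigma)
    we_l = (G * (1 - x)) ** 2 * D / (rho_l * sigma)
    return we_v, we_l

# Illustrative operating point (rough R-134a properties near 0 degC)
we_v, we_l = weber_numbers(G=200.0, x=0.3, D=0.0095,
                           rho_v=14.4, rho_l=1295.0, sigma=0.0115)
```

Plotting We_v against We_l for each observed regime would reproduce the first kind of flow map described above.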
Abstract: The occurrence of missing values in databases is a serious problem for Data Mining tasks, responsible for degrading data quality and the accuracy of analyses. In this context, the area lacks standardization of experiments for treating missing values, which makes evaluation across different studies difficult due to the absence of common parameters. This paper proposes a testbed intended to facilitate the implementation of experiments and to provide unbiased parameters, using available datasets and suitable performance metrics, in order to optimize the evaluation and comparison of state-of-the-art missing-value treatments.
Abstract: In practice, wireless networks have the property that the signal strength attenuates with distance from the base station, so quality of service can be improved if nodes two hops away are also considered. In this paper, we propose a procedure to identify delay-preserving substructures of a given wireless ad-hoc network using a new graph operation G^2 - E(G) = G* (the edge difference of the square of a given graph and the original graph). This operation helps to analyze certain induced substructures which preserve delay in communication among their nodes. The operation G* on a given graph induces a graph in which the 1-hop neighbors of any node are at 2-hop distance in the original network. In this paper, we also identify some delay-preserving substructures in G*: (i) a set of nodes which are mutually at 2-hop distance in G forms a clique in G*; (ii) a set of nodes which forms an odd cycle C2k+1 in G forms an odd cycle in G*, while a set of nodes which forms an even cycle C2k in G forms two disjoint companion cycles (of the same parity, odd/even) of length k in G*; (iii) every path of length 2k+1 or 2k in G induces two disjoint paths of length k in G*; and (iv) a set of nodes in G* which induces a maximal connected subgraph with radius 1 identifies a substructure with radius 2 and diameter at most 4 in G. The above delay-preserving substructures behave as good clusters in the original network.
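The operation G* = G^2 - E(G) links exactly those node pairs at distance 2, i.e. non-adjacent pairs sharing a common neighbor. A minimal stdlib sketch that builds G* from an adjacency dictionary and checks property (ii) for an even cycle, where C6 splits into two disjoint triangles:

```python
from itertools import combinations

def star_edges(adj):
    """Edges of G* = G^2 - E(G): node pairs at distance exactly 2,
    i.e. pairs with a common neighbor that are not already adjacent."""
    edges = set()
    for nbrs in adj.values():
        for a, b in combinations(sorted(nbrs), 2):
            if b not in adj[a]:
                edges.add((a, b))
    return edges

def components(nodes, edges):
    """Connected components (as sorted node lists) of an edge set."""
    adj2 = {v: set() for v in nodes}
    for a, b in edges:
        adj2[a].add(b)
        adj2[b].add(a)
    seen, comps = set(), []
    for v in nodes:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:
            w = stack.pop()
            if w in comp:
                continue
            comp.add(w)
            stack.extend(adj2[w])
        seen |= comp
        comps.append(sorted(comp))
    return sorted(comps)

C6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
comps = components(range(6), star_edges(C6))
print(comps)   # [[0, 2, 4], [1, 3, 5]] -- two disjoint triangles
```

For an odd cycle such as C5, `star_edges` returns five edges forming a single 5-cycle again, consistent with property (ii).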
Abstract: This paper presents a novel algorithm for path planning of mobile robots in known 3D environments using Binary Integer Programming (BIP). In this approach the path planning problem is formulated as a BIP with variables taken from the 3D Delaunay Triangulation of the Free Configuration Space and solved to obtain an optimal channel made of connected tetrahedrons. The 3D channel is then partitioned into convex fragments, which are used to build safe and short paths within it from Start to Goal. The algorithm is simple, complete, does not suffer from local minima, and is applicable to different workspaces with convex and concave polyhedral obstacles. A noticeable feature of this algorithm is that it extends simply to n-D configuration spaces.
Abstract: Since supply chains strongly affect the financial performance of companies, it is important to optimize and analyze their Key Performance Indicators (KPI). The synergistic combination of Particle Swarm Optimization (PSO) and Monte Carlo simulation is applied to determine the optimal reorder points of warehouses in supply chains. The goal of the optimization is the minimization of an objective function calculated as the linear combination of holding and ordering costs. The required service levels of the warehouses represent non-linear constraints in the PSO. The results illustrate that the developed stochastic simulator and optimization tool is flexible enough to handle complex situations.
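The abstract does not specify the simulator or the PSO variant; a minimal stdlib sketch of the combination, with a hypothetical single-warehouse (s, Q) policy, Gaussian daily demand, and the service-level constraint handled as a penalty. A fixed seed (common random numbers) keeps the noisy Monte Carlo objective comparable across candidate reorder points:

```python
import random

def simulate(reorder_point, days=400, mean_demand=10, lead=3, q=60,
             hold=1.0, order_cost=50.0, target=0.95):
    """One Monte Carlo run of a warehouse under an (s, Q) policy.
    Returns avg daily holding + ordering cost, plus a penalty if the
    achieved service level falls below the target."""
    rnd = random.Random(123)               # common random numbers
    inv, pipeline, served, demanded, cost = 80.0, [], 0.0, 0, 0.0
    for _ in range(days):
        pipeline = [(t - 1, amt) for t, amt in pipeline]
        inv += sum(amt for t, amt in pipeline if t <= 0)   # receive orders
        pipeline = [(t, amt) for t, amt in pipeline if t > 0]
        d = max(0, round(rnd.gauss(mean_demand, 3)))
        served += min(d, inv)
        demanded += d
        inv = max(0.0, inv - d)
        if inv + sum(amt for _, amt in pipeline) <= reorder_point:
            pipeline.append((lead, q))     # place a replenishment order
            cost += order_cost
        cost += hold * inv
    avg_cost = cost / days
    service = served / demanded
    if service < target:                   # non-linear constraint as penalty
        avg_cost += 1000.0 * (target - service)
    return avg_cost

def pso(f, lo, hi, n=12, iters=40, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal 1-D particle swarm minimizer with box constraints."""
    rnd = random.Random(seed)
    xs = [rnd.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest, pcost = xs[:], [f(x) for x in xs]
    g = pbest[pcost.index(min(pcost))]
    for _ in range(iters):
        for i in range(n):
            vs[i] = (w * vs[i] + c1 * rnd.random() * (pbest[i] - xs[i])
                     + c2 * rnd.random() * (g - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            c = f(xs[i])
            if c < pcost[i]:
                pcost[i], pbest[i] = c, xs[i]
        g = pbest[pcost.index(min(pcost))]
    return g, min(pcost)

best_s, best_cost = pso(simulate, 0.0, 120.0)
```

In a real study each candidate would be averaged over many replications; the single seeded run here only illustrates how the penalized objective couples the swarm to the simulator.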
Abstract: This paper presents the modeling of a MEMS based accelerometer designed to detect the presence of a wheel flat in a railway vehicle. A haversine wheel flat is assigned to one wheel of a 5 DOF pitch-plane vehicle model, which is coupled to a 3-layer track model. Based on the simulated acceleration response obtained from the vehicle-track model, an accelerometer is designed that meets all the requirements to detect the presence of a wheel flat. The proposed accelerometer can survive in a dynamic shock environment with accelerations up to ±150g. The parameters of the accelerometer are calculated to achieve the required specifications using a lumped element approximation, and the results are used for the initial design layout. A finite element analysis code (COMSOL) is used to simulate the accelerometer under various operating conditions and to determine the optimum configuration. The simulated results are found to be within about 2% of the calculated values, which indicates the validity of the lumped element approach. The stability of the accelerometer is also determined over the desired range of operation, including under shock.
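The lumped element approximation mentioned above reduces the proof mass and suspension to a spring-mass system; the two basic relations are the natural frequency f0 = sqrt(k/m)/(2*pi) and the static deflection x = m*a/k. A sketch with purely illustrative values (not the paper's design parameters):

```python
import math

def natural_frequency_hz(k, m):
    """Undamped natural frequency f0 = sqrt(k/m) / (2*pi) of a
    spring-mass lumped model of the proof mass [Hz]."""
    return math.sqrt(k / m) / (2.0 * math.pi)

def static_deflection(m, k, accel):
    """Proof-mass deflection x = m*a/k under constant acceleration [m]."""
    return m * accel / k

# Illustrative values only, not taken from the paper
m = 2e-9                                    # proof mass [kg]
k = 5.0                                     # suspension stiffness [N/m]
f0 = natural_frequency_hz(k, m)             # resonance [Hz]
x_150g = static_deflection(m, k, 150 * 9.81)  # deflection at the 150g limit [m]
```

Checking that the deflection at the ±150g shock limit stays well below the capacitive gap is one of the survivability conditions such a design must satisfy.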
Abstract: Many works have compared the efficiency of several goodness-of-fit procedures for identifying whether or not a particular distribution adequately explains a data set. In this paper a study is conducted to investigate the power of several goodness-of-fit tests: Kolmogorov-Smirnov (KS), Anderson-Darling (AD), Cramér-von Mises (CV) and a proposed modification of the Kolmogorov-Smirnov test which incorporates a variance stabilizing transformation (FKS). The performances of these tests are studied under simple random sampling (SRS) and Ranked Set Sampling (RSS). The study shows that, in general, the Anderson-Darling (AD) test performs better than the other GOF tests. However, there are some cases where the proposed test performs as well as the AD test.
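The power comparison can be illustrated by Monte Carlo under SRS: draw samples from an alternative distribution, apply each test at a fixed level, and count rejections. A sketch using `scipy.stats` (note: applying the plain KS p-value after standardizing with estimated parameters is conservative, which is the Lilliefors issue; the FKS variant and RSS design of the paper are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def power(reject, n=30, reps=300):
    """Monte Carlo power of a normality test when the data actually
    come from a heavy-tailed t(3) distribution."""
    return sum(reject(rng.standard_t(df=3, size=n)) for _ in range(reps)) / reps

def ks_reject(x, alpha=0.05):
    z = (x - x.mean()) / x.std(ddof=1)      # estimated params -> conservative
    return stats.kstest(z, 'norm').pvalue < alpha

def ad_reject(x):
    r = stats.anderson(x, dist='norm')
    return r.statistic > r.critical_values[2]   # index 2 = 5% level

p_ks = power(ks_reject)
p_ad = power(ad_reject)
```

The same scaffold extends to the Cramér-von Mises test via `stats.cramervonmises`, and comparing `p_ks` with `p_ad` mirrors the AD-versus-KS comparison reported above.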
Abstract: Preliminary results for a new flat plate test facility are presented here in the form of Computational Fluid Dynamics (CFD), flow visualisation, pressure measurements and thermal anemometry. The results from the CFD and flow visualisation show the effectiveness of the plate design, with the trailing edge flap anchoring the stagnation point on the working surface and reducing the extent of the leading edge separation. The flow visualisation technique demonstrates the two-dimensionality of the flow in the location where the thermal anemometry measurements are obtained. Measurements of the boundary layer mean velocity profiles compare favourably with the Blasius solution, thereby allowing comparison of future measurements with the wealth of data available on zero pressure gradient Blasius flows. Results for the skin friction, boundary layer thickness, friction velocity and wall shear stress agree well with the Blasius theory, with a maximum experimental deviation from theory of 5%. Two turbulence-generating grids have been designed and characterized, and it is shown that the turbulence decay downstream of both grids agrees with established correlations. It is also demonstrated that the turbulence depends little on the freestream velocity.
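The Blasius comparison mentioned above rests on the similarity equation f''' + (1/2) f f'' = 0 with f(0) = f'(0) = 0 and f'(inf) = 1; the wall value f''(0) ~ 0.33206 fixes the skin friction via c_f = 0.664 / sqrt(Re_x). A sketch of the standard shooting/bisection computation (`scipy` assumed available):

```python
from scipy.integrate import solve_ivp

def blasius_rhs(eta, y):
    """State y = (f, f', f''); Blasius: f''' = -0.5 * f * f''."""
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def shoot(fpp0, eta_max=10.0):
    """Integrate from the wall with guessed f''(0); return f' far away."""
    sol = solve_ivp(blasius_rhs, [0.0, eta_max], [0.0, 0.0, fpp0], rtol=1e-8)
    return sol.y[1, -1]

# f'(inf) grows monotonically with f''(0), so bisect on the wall value
lo, hi = 0.1, 1.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if shoot(mid) < 1.0:
        lo = mid
    else:
        hi = mid
fpp_wall = 0.5 * (lo + hi)   # classical value ~ 0.33206
```

The resulting tabulated f'(eta) profile is exactly the reference curve the measured mean velocity profiles are compared against.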
Abstract: Integrated Total Quality Management (TQM) with Lean Manufacturing (LM) is a system comprising TQM and LM principles and is associated with financial and non-financial performance measurement indicators. The ultimate goal of this system is to achieve total customer satisfaction by removing the eight wastes present in any process in an organization. A survey questionnaire was developed and distributed to 30 highly active automotive vendors in Malaysia and analyzed with PASW Statistics 18. It was found that these vendors have been practicing TQM and LM and measuring the effectiveness of their implementation. Greater involvement of all Malaysian automotive vendors would represent the exact status of the current Malaysian automotive industry in implementing TQM and LM and could determine whether the industry is ready for an integrated TQM and LM system. This is the first study to combine four award practices, ISO/TS16949, the Toyota Production System and SAE J4000.
Abstract: Extensive information is required within an R&D environment, and a considerable amount of time and effort is spent on finding the necessary information. An adaptive information-providing system would be beneficial to such an environment, and a conceptual model of the resources, people and context is a prerequisite for developing such applications. In this paper, an information model of various contexts and resources is proposed which enables effective applications for use in adaptive information systems within an R&D project and meeting environment.
Abstract: In this paper, a novel technique based on the Particle Swarm Optimization (PSO) algorithm is proposed to estimate and analyze the steady state performance of a self-excited induction generator (SEIG). With this method the tedious job of deriving the complex coefficients of a polynomial equation and solving it, as in previous methods, is not required. Comparing the simulation results obtained by the proposed method with those obtained by well-known mathematical methods shows good agreement between the results. The comparison validates the effectiveness of the proposed technique.
Abstract: Information hiding for authenticating and verifying the content integrity of multimedia has been exploited extensively in the last decade. We propose the idea of using a genetic algorithm and non-deterministic dependence, involving the un-watermarkable coefficients, for digital image authentication. A genetic algorithm is used to intelligently select coefficients for watermarking in a DCT based image authentication scheme, which also implicitly watermarks all the un-watermarkable coefficients in order to thwart different attacks. Experimental results show that such intelligent selection improves the imperceptibility of the watermarked image, and that implicit watermarking of all the coefficients improves security against attacks such as cover-up, vector quantization and transplantation.
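The GA-driven coefficient selection can be sketched in miniature: for a single 8x8 block, evolve a subset of AC DCT coefficients whose modification (here a crude quantization stand-in, not the paper's embedding rule) causes the least pixel-domain distortion. `numpy`/`scipy` are assumed available, and every parameter below is illustrative:

```python
import random
import numpy as np
from scipy.fft import dctn, idctn

random.seed(0)
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)

def distortion(indices, step=24.0):
    """Quantize the chosen DCT coefficients to multiples of `step`
    (a stand-in for a real watermark rule) and return the MSE."""
    c = dctn(block, norm='ortho').flatten()
    for i in indices:
        c[i] = step * np.round(c[i] / step)
    rec = idctn(c.reshape(8, 8), norm='ortho')
    return float(np.mean((rec - block) ** 2))

def select_coefficients(k=6, pop_size=20, gens=30):
    """Tiny GA over k-element subsets of the AC coefficients (1..63)."""
    pop = [sorted(random.sample(range(1, 64), k)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=distortion)
        elite = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = set(a[: k // 2] + b[k // 2:])     # one-point crossover
            if random.random() < 0.3 and len(child) > 1:
                child.discard(random.choice(sorted(child)))  # mutation
            while len(child) < k:
                child.add(random.randrange(1, 64))
            children.append(sorted(child))
        pop = elite + children
    return min(pop, key=distortion)

best = select_coefficients()
```

The full scheme described above additionally binds the unselected ("un-watermarkable") coefficients into the authentication code; this fragment only shows the fitness-driven selection step.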
Abstract: Carrier mobility has become the most important characteristic of high speed low dimensional devices. Owing to the development of very fast switching semiconductor devices, the speed of computer and communication equipment has been increasing day by day and will continue to do so. As the response of any device depends on the carrier motion within it, extensive studies of carrier mobility have become essential for growth in the field of low dimensional devices. Small-signal ac transport of degenerate two-dimensional hot electrons in GaAs quantum wells is studied here, incorporating deformation potential acoustic, polar optic and ionized impurity scattering in the framework of a heated drifted Fermi-Dirac carrier distribution. Delta doping is considered in the calculations to investigate the effects of double delta doping on the millimeter and submillimeter wave response of two-dimensional hot electrons in GaAs nanostructures. The inclusion of delta doping is found to considerably enhance the two-dimensional electron density, which in turn improves the carrier mobility (both ac and dc) in the GaAs quantum wells, thereby providing scope for higher speed devices in the future.
Abstract: Nowadays social media are important tools for web resource discovery. The performance and capabilities of web searches are vital, especially for search results from social research paper bookmarking. This paper proposes a new ranking algorithm, CSTRank, that combines similarity ranking with paper posted time. The paper posted time is a static ranking used to improve search results. In this study, the paper posted time is combined with similarity ranking to produce a better ranking than methods such as similarity ranking alone (SimRank). The retrieval performance of the combined rankings is evaluated using mean NDCG values. The experiments imply that CSTRank with a weight ratio of 90:10 can improve the efficiency of research paper searching on social bookmarking websites.
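The 90:10 combination and the NDCG evaluation can be sketched directly; the paper's exact time-score formula is not given in the abstract, so an exponential recency decay is an assumed stand-in:

```python
import math

def combined_score(sim, age_days, w_sim=0.9, w_time=0.1, half_life=180.0):
    """CSTRank-style score: 90:10 mix of similarity and a recency score
    (exponential decay here is an assumption, not the paper's formula)."""
    return w_sim * sim + w_time * math.exp(-age_days / half_life)

def ndcg(relevances):
    """NDCG of a ranked list of graded relevance labels."""
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(relevances))
    ideal = sum(r / math.log2(i + 2)
                for i, r in enumerate(sorted(relevances, reverse=True)))
    return dcg / ideal if ideal else 0.0

# Hypothetical candidates: (similarity to query, age in days, true relevance)
papers = [(0.90, 900, 1), (0.85, 30, 3), (0.80, 10, 2)]
ranked = sorted(papers, key=lambda p: combined_score(p[0], p[1]), reverse=True)
score = ndcg([p[2] for p in ranked])
```

In this toy example the time component promotes the two recent, more relevant papers above the older one with the highest raw similarity, which is precisely the effect the combined ranking is meant to capture.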
Abstract: Electronic systems are at the core of everyday life. They form an integral part of financial networks, mass transit, telephone systems, power plants and personal computers. Electronic systems are increasingly based on complex VLSI (Very Large Scale Integration) circuits. Electronic design automation is concerned with the design and production of VLSI systems. An important step in creating a VLSI circuit is physical design. The input to physical design is a logical representation of the system under design; the output is the layout of a physical package that optimally or near-optimally realizes the logical representation. Physical design problems are combinatorial in nature and of large problem size. Darwin observed that, as variations are introduced into a population with each new generation, the less fit individuals tend to die out in the competition for basic necessities; this survival-of-the-fittest principle leads to the evolution of species. The objective of Genetic Algorithms (GA) is to find an optimal solution to a problem. Since GAs are heuristic procedures that can function as optimizers, they are not guaranteed to find the optimum, but they are able to find acceptable solutions for a wide range of problems. This survey paper studies efficient algorithms for VLSI physical design and observes the common traits of the superior contributions.
Abstract: Functional imaging procedures for the non-invasive assessment of tissue microcirculation are in high demand, but they require a mathematical approach describing the trans- and intercapillary passage of tracer particles. Up to now, two theoretically different concepts have been established for tracer kinetic modeling of contrast agent transport in tissues: pharmacokinetic compartment models, which are usually written as coupled differential equations, and the indicator dilution theory, which can be generalized in accordance with the theory of linear time-invariant (LTI) systems by using a convolution approach. Based on mathematical considerations, it can be shown that, even in the case of the open two-compartment model well known from functional imaging, the concentration-time course in tissue is given by a convolution, which allows a separation of the arterial input function from a system function, the impulse response function, summarizing the available information on tissue microcirculation. For this reason, it is possible to integrate the open two-compartment model into the system-theoretic concept of indicator dilution theory (IDT), and thus results known from IDT remain valid for the compartment approach. Given the large number of applications of compartmental analysis, similar solutions of the so-called forward problem, even in a more general context, can already be found in the extensive literature of the seventies and early eighties. Nevertheless, to this day, within the field of biomedical imaging (though not from the mathematical point of view) there seems to be a gulf between the two approaches, which the author would like to bridge by exemplary analysis of the well-known model.
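The central statement, that the tissue curve is C_t(t) = (AIF * h)(t) with a bi-exponential impulse response for the open two-compartment model, can be illustrated numerically; the gamma-variate input and the rate constants below are illustrative, not fitted values:

```python
import numpy as np

dt = 0.1
t = np.arange(0.0, 60.0, dt)                            # time axis [min]
aif = (t ** 2) * np.exp(-t / 2.0)                       # assumed arterial input
# Bi-exponential impulse response, the generic form produced by an
# open two-compartment model (amplitudes/rates are illustrative)
h = 0.6 * np.exp(-0.3 * t) + 0.1 * np.exp(-0.02 * t)
# Discrete convolution approximating the LTI relation C_t = AIF * h
ct = np.convolve(aif, h)[: t.size] * dt
```

Because the model enters only through h(t), deconvolving a measured tissue curve with the arterial input recovers the system function, which is exactly the bridge to indicator dilution theory described above.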
Abstract: The two significant overvoltages in power systems, switching overvoltage and lightning overvoltage, are investigated in this paper. First, the effect of various power system parameters on line energization overvoltages is evaluated by simulation in ATP. The dominant parameters include the line parameters, the short-circuit impedance and the circuit breaker parameters. Solutions to reduce switching overvoltages are reviewed, and controlled closing using switchsync controllers is proposed as a suitable method. This paper also investigates lightning overvoltages at the overhead-cable transition. Simulations are performed in PSCAD/EMTDC. Surge arresters are applied at both ends of the cable to fulfill the insulation coordination. The maximum amplitude of the overvoltages inside the cable is surveyed, which should be of great concern in insulation coordination studies.
Abstract: This paper proposes a meta-heuristic, Ant Colony Optimization, to solve multi-objective production problems. The multi-objective function is to minimize lead time and work in process. The problem involves two decision variables, i.e., distance and process time. According to the decision criteria, a mathematical model is formulated, and an ant colony optimization approach is developed to solve it. The proposed algorithm is parameterized by the number of ant colonies and the number of pheromone trails. One example is given to illustrate the effectiveness of the proposed model. The proposed formulation, a Max-Min Ant System, is then used to solve the problem, and simulation results evaluate the performance and efficiency of the proposed algorithm.
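The abstract does not give the Max-Min Ant System details; a minimal stdlib sketch on a hypothetical three-stage problem, where each stage offers machine options with (distance, process time) pairs and the two objectives are combined with equal weights. As in MMAS, pheromone is clamped to [TAU_MIN, TAU_MAX] and only the best-so-far solution deposits:

```python
import random

random.seed(3)

# Hypothetical (distance, process time) options per production stage
stages = [
    [(2, 5), (4, 2), (3, 3)],
    [(1, 4), (5, 1), (2, 2)],
    [(3, 3), (2, 6), (4, 1)],
]
W_DIST, W_TIME = 0.5, 0.5
TAU_MIN, TAU_MAX = 0.1, 5.0
tau = [[1.0] * len(s) for s in stages]          # one pheromone trail per option

def cost(choice):
    """Weighted-sum scalarization of the two objectives."""
    return sum(W_DIST * stages[i][m][0] + W_TIME * stages[i][m][1]
               for i, m in enumerate(choice))

def build_ant(alpha=1.0, beta=2.0):
    """One ant: pick an option per stage by pheromone * heuristic weight."""
    choice = []
    for i, opts in enumerate(stages):
        heur = [1.0 / (W_DIST * d + W_TIME * t) for d, t in opts]
        w = [tau[i][m] ** alpha * heur[m] ** beta for m in range(len(opts))]
        choice.append(random.choices(range(len(opts)), weights=w)[0])
    return choice

best, best_cost = None, float('inf')
for _ in range(100):
    for _ in range(10):                         # colony of 10 ants
        a = build_ant()
        c = cost(a)
        if c < best_cost:
            best, best_cost = a, c
    for i in range(len(stages)):                # evaporate, clamp to tau_min
        for m in range(len(stages[i])):
            tau[i][m] = max(TAU_MIN, tau[i][m] * 0.9)
        tau[i][best[i]] = min(TAU_MAX, tau[i][best[i]] + 1.0 / best_cost)
```

On this toy instance the optimum (total weighted cost 7.5) is found quickly; the bounds on pheromone are what distinguish MMAS from the basic ant system, preventing premature convergence to a single trail.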