Abstract: A number of routing algorithms based on the learning automata technique have been proposed for communication networks. However, there has been little work on the effect of variations in graph sparsity on the performance of these algorithms. In this paper, a comprehensive study is conducted to investigate the performance of LASPA, the first learning-automata-based solution to dynamic shortest path routing, across graph structures of varying sparsity. The sensitivity of the algorithm's three main performance parameters, namely the average number of processed nodes, the average number of scanned edges, and the average time per update, to variations in graph sparsity is reported. Simulation results indicate that the LASPA algorithm adapts well to sparsity variations in the graph structure and performs much better, in terms of these criteria, than the existing dynamic and fixed algorithms.
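As an illustration of the mechanism such routers rely on, the sketch below shows a linear reward-inaction (L_R-I) update, the probability update commonly used by learning automata; the abstract does not specify LASPA's exact scheme, so the learning rate and the update rule here are assumptions for illustration only.

```python
# Illustrative linear reward-inaction (L_R-I) update for a learning
# automaton choosing among candidate next-hop edges. The learning rate
# a and the rule itself are assumptions; LASPA's actual scheme is not
# given in the abstract.

def reward_update(probs, chosen, a=0.1):
    """On a favourable response, shift probability mass toward `chosen`."""
    return [p + a * (1 - p) if i == chosen else p * (1 - a)
            for i, p in enumerate(probs)]

p = [0.25, 0.25, 0.25, 0.25]   # action probabilities over candidate edges
for _ in range(20):            # repeated rewards for action 0
    p = reward_update(p, 0)
```

The update keeps the probabilities summing to one while the repeatedly rewarded action comes to dominate.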
Abstract: The problem of exponential stability and periodicity for a class of delayed cellular neural networks (DCNNs) with time-varying delays is investigated. By dividing the network state variables into subgroups according to the characters of the neural networks, some sufficient conditions for exponential stability and periodicity are derived via the method of variation of parameters and inequality techniques. These conditions are expressed in terms of blocks of the interconnection matrices. Compared with some previous methods, the method used in this paper does not resort to any Lyapunov function, and the results derived here improve and generalize some earlier criteria established in the literature cited therein. Two examples are discussed to illustrate the main results.
Abstract: Design and land use are closely linked to the energy efficiency levels of an urban area. Current city planning practice does not involve an effective land use-energy evaluation in its 'blueprint' urban plans. This study proposes an appraisal method, embeddable in GIS programs and using five planning criteria, that quantifies how far a planner may deviate from the planning principles (criteria) for the maximum energy output s/he can obtain. The case of Balcova, a district in the Izmir Metropolitan area, is used to evaluate the proposed master plan and the use of geothermal energy (heating only) for the district concerned. If the land use design were revised for maximum energy efficiency (a 30% gain was obtained), mainly by increasing the density around the geothermal wells and proposing more mixed-use zones, there would be a 17% distortion (infidelity to the main planning principles) from the original plan. The proposed method can be an effective simulation tool for planners, with calculations performed by ready GIS tools, to evaluate the efficiency levels of different plan proposals, showing how much deviation from the other planning ideals a given energy saving causes. Lower energy use is possible for different land use proposals under various policy trials.
Abstract: Environmental aspects play a central role in an environmental management system (EMS) because they are the basis for the identification of an organization's environmental targets. Existing methods for the assessment of environmental aspects fall into three categories: risk-assessment-based (RA-based), LCA-based, and criterion-based methods. To combine the benefits of these three categories, this study proposes an integrated framework combining RA-, LCA- and criterion-based methods. The integrated framework incorporates LCA techniques for the identification of the causal linkage of aspect, pathway, receptor and impact; uses fuzzy logic to assess aspects; considers fuzzy conditions in likelihood assessment; and employs a new multi-criteria decision analysis method, multi-criteria and multi-connection comprehensive assessment (MMCA), to determine the significant aspects in an EMS. The proposed model is verified using a real case study, and the results show that this method successfully prioritizes the environmental aspects.
Abstract: Multi-site damage (MSD) has been a challenge for aircraft, civil and power plant structures. In real life, components are subjected to cracking at many vulnerable locations, such as bolt holes. However, the presence of multiple cracks is often not accounted for. Unlike components with a single crack, the behaviour of these components is difficult to predict. When two cracks approach one another, their stress fields influence each other and produce an enhancing or shielding effect, depending on the position of the cracks. In the present study, numerical fracture analyses have been conducted using a developed code based on the modified virtual crack closure integral (MVCCI) technique, together with the finite element analysis (FEA) software ABAQUS, for computing the stress intensity factor (SIF) of plates with multiple cracks. Various parametric studies have been carried out, and the results have been compared with the literature wherever available and also with the solution obtained by using ABAQUS. Through these extensive numerical studies, expressions for the SIF have been obtained for collinear and non-aligned cracks.
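For context, the baseline quantity the study's expressions build on can be sketched as follows; the classical single-crack formula K_I = σ√(πa) for a centre crack in an infinite plate is shown, while the multiple-crack interaction factors, which are the subject of the paper, come from the MVCCI/FEA analysis and are not reproduced here.

```python
import math

# Baseline stress intensity factor for a through-thickness centre crack
# in an infinite plate under remote tension: K_I = sigma * sqrt(pi * a).
# Interaction (enhancing/shielding) factors for multiple cracks are the
# subject of the paper and are not reproduced here.

def sif_centre_crack(sigma, a):
    """K_I in MPa*sqrt(m) for remote stress sigma (MPa) and half-length a (m)."""
    return sigma * math.sqrt(math.pi * a)

k = sif_centre_crack(100.0, 0.01)  # 100 MPa remote stress, 10 mm half-crack
```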
Abstract: The contents of nitrates and nitrites were monitored in 15 groundwater resources of a selected region earmarked for the emergency supply of the population. The resources were selected on the basis of a previous assessment of natural conditions and of the exploitation of the territory in the infiltration area as well as the surroundings of the water resources. The health risk analysis carried out for nitrates and nitrites, which were found to be the most serious water contaminants, proved that 14 resources met the health standards for the assessed criterion and could be included in crisis plans. The water quality of ground resources may be assessed in the same way with regard to other contaminants.
Abstract: This paper presents a heuristic to solve the large-size 0-1 multi-constrained knapsack problem (01MKP), which is NP-hard. Many researchers have used heuristic operators to identify the redundant constraints of a linear programming problem before applying the regular procedure to solve it. We use the intercept matrix to identify the zero-valued variables of the 01MKP, which are known as redundant variables. In this heuristic, first the dominance property of the intercept matrix of the constraints is exploited to reduce the search space for finding optimal or near-optimal solutions of the 01MKP; second, the solution is improved by using the pseudo-utility ratio based on the surrogate constraint of the 01MKP. The heuristic is tested on benchmark problems of sizes up to 2500 taken from the literature, and the results are compared with the optimum solutions. The space and computational complexity of solving the 01MKP using this approach are also presented. The encouraging results, especially for relatively large test problems, indicate that this heuristic can successfully be used for finding good solutions to highly constrained NP-hard problems.
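The pseudo-utility idea can be sketched as a greedy pass: rank items by profit per surrogate-weighted resource use and pack feasibly in that order. This is only an illustration of the second phase; the surrogate multipliers here are assumed equal (each constraint weighted by its capacity), and the paper's intercept-matrix reduction step is not reproduced.

```python
# Illustrative greedy for the 0-1 multi-constrained knapsack (01MKP)
# using a pseudo-utility ratio based on a surrogate constraint with
# assumed equal multipliers. The paper's intercept-matrix reduction is
# not reproduced here.

def greedy_01mkp(profits, weights, capacities):
    """weights[i][j]: consumption of resource i by item j."""
    m, n = len(weights), len(profits)
    # pseudo-utility: profit over surrogate-weighted resource use
    ratio = [profits[j] / sum(weights[i][j] / capacities[i] for i in range(m))
             for j in range(n)]
    order = sorted(range(n), key=lambda j: -ratio[j])
    used, chosen = [0] * m, []
    for j in order:                      # pack greedily while feasible
        if all(used[i] + weights[i][j] <= capacities[i] for i in range(m)):
            chosen.append(j)
            for i in range(m):
                used[i] += weights[i][j]
    return sorted(chosen), sum(profits[j] for j in chosen)
```

On a toy instance with four items and two constraints, the greedy pass already recovers the optimum.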
Abstract: An iterative definition of any n-variable mean function is given in this article, which iteratively uses the two-variable form of the corresponding mean function. This extension method avoids recursion, which is an important improvement over certain recursive formulas given before by Ando-Li-Mathias and Petz-Temesi. Furthermore, it is conjectured here that this iterative algorithm coincides with the solution of the Riemannian centroid minimization problem. Simulations are given to compare the convergence rates of the different algorithms in the literature, namely the gradient and Newton methods for Riemannian centroid computation.
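The flavour of such two-variable mean iterations can be seen in the scalar case: iterating the arithmetic and harmonic means leaves the product a·b invariant, so the iteration converges to the geometric mean. This is only a scalar illustration; the paper's n-variable matrix construction is not reproduced here.

```python
# Scalar illustration of iterating a two-variable mean: the
# arithmetic-harmonic iteration converges to the geometric mean,
# since each step preserves the product a*b.

def arithmetic_harmonic_mean(a, b, tol=1e-12):
    while abs(a - b) > tol:
        # replace (a, b) by their arithmetic and harmonic means
        a, b = (a + b) / 2, 2 * a * b / (a + b)
    return a
```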
Abstract: Exclusive breastfeeding is the feeding of a baby on no other milk apart from breast milk. Exclusive breastfeeding during the first 6 months of life is of fundamental importance because it supports optimal growth and development during infancy and reduces the risk of debilitating diseases and problems. Moreover, in developed countries, exclusive breastfeeding has decreased the incidence and/or severity of diarrhea, lower respiratory infection and urinary tract infection. In this paper, we study the factors that influence exclusive breastfeeding and use the generalized Poisson regression model to analyze the practices of exclusive breastfeeding in Mauritius. We develop two sets of quasi-likelihood equations (QLE) to estimate the parameters.
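The distribution underlying the model can be written down directly: the generalized Poisson probability mass function reduces to the ordinary Poisson when its dispersion parameter is zero. The sketch below shows only the pmf; the paper's quasi-likelihood estimation is not reproduced.

```python
import math

# Generalized Poisson probability mass function
#   P(Y = y) = theta * (theta + lam*y)**(y-1) * exp(-theta - lam*y) / y!
# for 0 <= lam < 1. With lam = 0 this reduces to the ordinary Poisson.

def gp_pmf(y, theta, lam):
    return (theta * (theta + lam * y) ** (y - 1)
            * math.exp(-theta - lam * y) / math.factorial(y))
```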
Abstract: Cameron Highlands is a mountainous area subjected
to torrential tropical showers. It extracts 5.8 million liters of water
per day for drinking supply from its rivers at several intake points.
The water quality of rivers in Cameron Highlands, however, has
deteriorated significantly due to land clearing for agriculture,
excessive usage of pesticides and fertilizers as well as construction
activities in rapidly developing urban areas. These pollution sources, known as non-point pollution sources, are diverse and hard to identify, and they are therefore difficult to estimate. Hence, a Geographical Information System (GIS) was used to provide an extensive approach to evaluating land use and other mapping characteristics, to explain the spatial distribution of non-point sources of contamination in Cameron Highlands. The method for assessing pollution sources was developed by using the Cameron Highlands Master Plan (2006-2010) and by integrating GIS, databases, and pollution loads in the study area. The results show that the highest annual runoff is created by forest, 3.56 × 10⁸ m³/yr, followed by urban development, 1.46 × 10⁸ m³/yr. Furthermore, urban development causes the highest BOD load (1.31 × 10⁶ kg BOD/yr), while agricultural activities and forest contribute the highest annual loads of phosphorus (6.91 × 10⁴ kg P/yr) and nitrogen (2.50 × 10⁵ kg N/yr), respectively. Therefore, best management practices (BMPs) are suggested to reduce the pollution level in the area.
Abstract: The vehicle routing problem (VRP) is a famous combinatorial optimization problem. Because of its well-known difficulty, metaheuristics are the most appropriate methods to tackle large and realistic instances. The goal of this paper is to highlight the key ideas for designing VRP metaheuristics according to the following criteria: efficiency, speed, robustness, and ability to take advantage of the problem structure. Such elements can obviously be used to build solution methods for other combinatorial optimization problems, at least in the deterministic field.
Abstract: In recent years, a number of works proposing the
combination of multiple classifiers to produce a single
classification have been reported in remote sensing literature. The
resulting classifier, referred to as an ensemble classifier, is
generally found to be more accurate than any of the individual
classifiers making up the ensemble. As accuracy is the primary
concern, much of the research in the field of land cover
classification is focused on improving classification accuracy. This
study compares the performance of four ensemble approaches
(boosting, bagging, DECORATE and random subspace) with a
univariate decision tree as the base classifier. Two training datasets, one without any noise and the other with 20 percent noise, were used to judge the performance of the different ensemble approaches. Results with the noise-free dataset suggest an improvement of about 4% in classification accuracy with all ensemble approaches in comparison to the results provided by the univariate decision tree classifier. The highest classification accuracy, 87.43%, was achieved by the boosted decision tree. A comparison of results with the noisy dataset suggests that the bagging, DECORATE and random subspace approaches work well with these data, whereas the performance of the boosted decision tree degrades: a classification accuracy of 79.7% is achieved, which is even lower than that achieved (i.e. 80.02%) by the unboosted decision tree classifier.
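The combination step shared by these ensemble approaches can be sketched as an unweighted majority vote over the base classifiers' predictions. This is only the voting stage; the approaches compared in the study differ in how the base classifiers are trained, and boosting in particular uses weighted votes.

```python
from collections import Counter

# Unweighted majority vote over the predictions of several base
# classifiers, the combination step common to bagging, DECORATE and
# random subspace ensembles (boosting weights the votes instead).

def majority_vote(predictions):
    """predictions[k][n]: label assigned by classifier k to sample n."""
    n_samples = len(predictions[0])
    return [Counter(p[n] for p in predictions).most_common(1)[0][0]
            for n in range(n_samples)]
```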
Abstract: The decoding of low-density parity-check (LDPC) codes operates over a redundant structure known as the bipartite graph, meaning that the full set of bit nodes is not absolutely necessary for decoder convergence. In 2008, Soyjaudah and Catherine designed a recovery algorithm for LDPC codes based on this assumption and showed that the error-correcting performance of their codes outperformed that of conventional LDPC codes. In this work, the use of the recovery algorithm is further explored to test the performance of LDPC codes as the number of iterations is progressively increased. For experiments conducted with small block lengths of up to 800 bits and up to 2000 iterations, the results interestingly demonstrate that, contrary to conventional wisdom, the error-correcting performance keeps improving with an increasing number of iterations.
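To illustrate what "decoding over iterations" means on a parity-check structure, the sketch below runs simple bit-flipping decoding, re-evaluating the parity checks each iteration. A small Hamming(7,4) matrix stands in for a (much larger, sparser) LDPC code, and the recovery algorithm of Soyjaudah and Catherine is not reproduced.

```python
import numpy as np

# Iterative bit-flipping decoding over a parity-check matrix H. Each
# iteration flips the bit involved in the most unsatisfied checks, then
# re-evaluates the syndrome. A Hamming(7,4) matrix stands in for an
# LDPC parity-check matrix here.

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def bit_flip_decode(word, max_iters=10):
    word = word.copy()
    for _ in range(max_iters):
        syndrome = H @ word % 2
        if not syndrome.any():
            return word, True          # all parity checks satisfied
        # flip the bit participating in the most unsatisfied checks
        word[np.argmax(syndrome @ H)] ^= 1
    return word, False
```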
Abstract: There are three approaches to complete Bayesian Network (BN) model construction: total expert-centred, total data-centred, and semi data-centred. These three approaches constitute the basis of the empirical investigation undertaken and reported in this paper. The objective is to determine which of these three approaches is optimal for the construction of a BN-based model for the performance assessment of students' laboratory work in a virtual electronic laboratory environment. BN models were constructed using all three approaches, with respect to the focus domain, and compared using a set of optimality criteria. In addition, the impact of the size and source of the training data on the performance of the total data-centred and semi data-centred models was investigated. The results of the investigation provide additional insight for BN model constructors and contribute supportive evidence to the literature for the conceptual feasibility and efficiency of structure and parameter learning from data. In addition, the results highlight other interesting themes.
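The parameter-learning step at the heart of the data-centred approaches can be sketched as maximum-likelihood estimation of a conditional probability table from observed (parent, child) pairs. This is only the CPT estimation step; structure learning and the paper's optimality criteria are not reproduced.

```python
from collections import Counter

# Maximum-likelihood estimation of a conditional probability table
# P(child | parent) from observed (parent, child) pairs, the parameter-
# learning step of data-centred BN construction.

def learn_cpt(pairs):
    joint = Counter(pairs)                      # counts of (parent, child)
    marginal = Counter(p for p, _ in pairs)     # counts of parent alone
    return {(p, c): n / marginal[p] for (p, c), n in joint.items()}
```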
Abstract: A comparative analysis of the Wald and Bayes-type sequential methods for testing hypotheses is offered. The merits of the new sequential test are: universality, which consists in optimality (with given criteria) and uniformity of the decision-making regions for any number of hypotheses; the simplicity, convenience and uniformity of the algorithms realizing it; the reliability of the obtained results; and the possibility of keeping the error probabilities at desired values. Computation results for concrete examples are given which confirm the above-stated characteristics of the new method and characterize the considered methods with regard to each other.
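For reference, the Wald side of the comparison can be sketched directly: the sequential probability ratio test (SPRT) accumulates a log-likelihood ratio and stops at Wald's approximate thresholds A = (1-β)/α and B = β/(1-α). The Bernoulli setting below is an illustrative choice; the paper's Bayes-type test is not reproduced.

```python
import math

# Wald's sequential probability ratio test (SPRT) for a Bernoulli
# parameter, H0: p = p0 versus H1: p = p1, with Wald's approximate
# stopping thresholds A = (1-beta)/alpha and B = beta/(1-alpha).

def sprt(observations, p0=0.5, p1=0.9, alpha=0.05, beta=0.05):
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for n, x in enumerate(observations, 1):
        # accumulate the log-likelihood ratio of H1 against H0
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue", len(observations)
```

The sample size at which the test stops depends on the data, which is exactly the sequential character the abstract compares across methods.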
Abstract: This paper presents the convergence analysis
of a prediction based blind equalizer for IIR channels.
Predictor parameters are estimated by using the recursive
least squares algorithm. It is shown that the prediction error converges almost surely (a.s.) toward a scalar multiple of the unknown input symbol sequence. It is also proved that the convergence rate of the parameter estimation error is of the same order as in the law of the iterated logarithm.
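The recursive least squares estimator the analysis concerns can be sketched in its standard form; the regressor construction, channel model, and convergence proof of the paper are not reproduced, and the forgetting factor below is an assumed value.

```python
import numpy as np

# Standard recursive least squares (RLS) parameter estimation:
# gain, parameter, and inverse-correlation updates per sample.
# lam is the forgetting factor (assumed value here).

def rls(xs, ds, order, lam=0.99, delta=100.0):
    w = np.zeros(order)            # parameter estimate
    P = delta * np.eye(order)      # inverse correlation matrix
    for x, d in zip(xs, ds):       # x: regressor vector, d: desired output
        Px = P @ x
        k = Px / (lam + x @ Px)    # gain vector
        w += k * (d - w @ x)       # update toward the prediction error
        P = (P - np.outer(k, Px)) / lam
    return w
```

On noiseless synthetic data the estimate converges to the true parameters within a few hundred samples.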
Abstract: In this paper, we propose a Haar wavelet quasi-linearization method to solve the well-known Blasius equation. The method is based on the uniform Haar wavelet operational matrix defined over the interval [0, 1]. We propose a transformation for converting the problem onto a fixed computational domain. The Blasius equation arises in various boundary layer problems of hydrodynamics and in the fluid mechanics of laminar viscous flows. Quasi-linearization is an iterative process, but the proposed technique gives excellent numerical results with quasi-linearization for solving nonlinear differential equations without any iteration in the selection of the Haar wavelet collocation points. We solve the Blasius equation for 1 ≤ α ≤ 2 and compare the numerical results with those available in the literature. We conclude that the proposed method is a promising tool for solving the well-known nonlinear Blasius equation.
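A classical reference solution against which such wavelet results are usually checked can be sketched with a shooting method: integrate f''' + ½ f f'' = 0 with f(0) = f'(0) = 0 and bisect on the wall value f''(0) until f'(∞) = 1, recovering the classical value f''(0) ≈ 0.332. This is a baseline for comparison, not the paper's Haar-wavelet quasi-linearization method.

```python
# Shooting-method baseline for the Blasius equation
#   f''' + (1/2) f f'' = 0,  f(0) = f'(0) = 0,  f'(inf) = 1.

def blasius_shoot(s, eta_max=10.0, h=0.01):
    """RK4-integrate with f''(0) = s; return f'(eta_max)."""
    def rhs(y):
        f, fp, fpp = y
        return (fp, fpp, -0.5 * f * fpp)
    y = (0.0, 0.0, s)
    for _ in range(int(eta_max / h)):
        k1 = rhs(y)
        k2 = rhs(tuple(y[i] + h / 2 * k1[i] for i in range(3)))
        k3 = rhs(tuple(y[i] + h / 2 * k2[i] for i in range(3)))
        k4 = rhs(tuple(y[i] + h * k3[i] for i in range(3)))
        y = tuple(y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(3))
    return y[1]

# bisect on the unknown wall shear f''(0) so that f'(eta_max) = 1
lo, hi = 0.1, 1.0
for _ in range(40):
    mid = (lo + hi) / 2
    if blasius_shoot(mid) < 1.0:
        lo = mid
    else:
        hi = mid
fpp0 = (lo + hi) / 2   # classical Blasius value, approximately 0.332
```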
Abstract: This paper presents kinematic and dynamic analysis of a novel 8-DOF hybrid robot manipulator. The hybrid robot manipulator under consideration consists of a parallel robot which
is followed by a serial mechanism. The parallel mechanism has three translational DOF, and the serial mechanism has five DOF so that the overall degree of freedom is eight. The introduced
manipulator has a wide workspace and a high capability to reduce
the actuating energy. The inverse and forward kinematic solutions are described in closed form. The theoretical results are verified by
a numerical example. The inverse dynamic analysis of the robot is presented by utilizing the iterative Newton-Euler and Lagrange dynamic formulation methods. Finally, for a multi-step arc welding process, the results indicate that the introduced manipulator is highly capable of reducing the actuating energy.
Abstract: Modelling techniques for a fluid coupling taken from
published literature have been extended to include the effects of the
filling and emptying of the coupling with oil and the variation in
losses when the coupling is partially full. In the model, the fluid flow
inside the coupling is considered to have two principal velocity
components; one circumferentially about the coupling axis
(centrifugal head) and the other representing the secondary vortex
within the coupling itself (vortex head). The calculation of liquid
mass flow rate circulating between the two halves of the coupling is
based on: the assumption of a linear velocity variation in the
circulating vortex flow; the head differential in the fluid due to the
speed difference between the two shafts; and the losses in the
circulating vortex flow as a result of the impingement of the flow
with the blades in the coupling and friction within the passages
between the blades.
Abstract: In this paper, fluid flow patterns of steady incompressible flow inside a shear-driven cavity are studied. The numerical simulations are conducted using the lattice Boltzmann method (LBM) for different Reynolds numbers. In order to simulate the flow, the derivation of the macroscopic hydrodynamic equations from the continuous Boltzmann equation needs to be performed. The numerical results for shear-driven flow inside square and triangular cavities are then compared with results found in the literature. The present study found that the flow patterns are affected by the geometry of the cavity and the Reynolds numbers used.
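The collide-and-stream structure of such an LBM simulation can be sketched with a minimal D2Q9 BGK step; for brevity, this sketch uses a periodic grid and omits the moving-lid and no-slip boundaries of the cavity, so the grid size, relaxation time, and initial velocity field below are illustrative assumptions, and only mass conservation is checked.

```python
import numpy as np

# Minimal D2Q9 lattice Boltzmann (BGK) collide-and-stream step on a
# periodic grid. The cavity's moving-lid and no-slip boundaries are
# omitted for brevity.

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau=0.6):
    rho = f.sum(axis=0)                                  # macroscopic density
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho     # macroscopic velocity
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau         # BGK collision
    for i in range(9):                                   # streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f
```

Since both the BGK collision and the streaming step conserve density, the total mass on the grid stays constant over the iterations.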