Abstract: Decrease in hardware costs and advances in computer
networking technologies have led to increased interest in the use of
large-scale parallel and distributed computing systems. One of the
biggest issues in such systems is the development of effective
techniques/algorithms for the distribution of the processes/load of a
parallel program on multiple hosts to achieve goals such as
minimizing execution time, minimizing communication delays,
maximizing resource utilization and maximizing throughput.
Substantive research using queuing analysis, assuming job
arrivals that follow a Poisson pattern, has shown that in a multi-host
system the probability of one host being idle while another host
has multiple jobs queued up can be very high. Such imbalances in
system load suggest that performance can be improved either by
transferring jobs from the currently heavily loaded hosts to the lightly
loaded ones or by distributing load evenly among the hosts. The
algorithms known as load balancing algorithms help to achieve
these goals. These algorithms fall into two basic categories -
static and dynamic. Whereas static load balancing (SLB)
algorithms make decisions regarding the assignment of tasks to
processors at compile time, based on average estimated values of
process execution times and communication delays, dynamic load
balancing (DLB) algorithms are adaptive to changing situations and
make decisions at run time.
The objective of this paper is to identify qualitative
parameters for the comparison of the above algorithms. In the future, this
work can be extended to develop an experimental environment in
which these load balancing algorithms are studied quantitatively on
the basis of the comparative parameters.
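As a minimal illustration of the two categories (a sketch with hypothetical host and job names, not an implementation from the paper), a static balancer commits placements once using cost estimates, while a dynamic balancer re-reads actual host loads at run time:

```python
import random

def static_assign(est_costs, hosts):
    # Static LB (SLB): decide all placements up front from *estimated*
    # execution costs, as would be done at compile time.
    load = {h: 0.0 for h in hosts}
    plan = {}
    for job, est in sorted(est_costs.items(), key=lambda kv: -kv[1]):
        h = min(load, key=load.get)
        load[h] += est
        plan[job] = h
    return plan

def dynamic_assign(jobs, hosts):
    # Dynamic LB (DLB): decide each placement at run time from the
    # *actual* loads observed as jobs arrive.
    load = {h: 0.0 for h in hosts}
    plan = {}
    for job, actual_cost in jobs:
        h = min(load, key=load.get)
        load[h] += actual_cost
        plan[job] = h
    return plan

random.seed(1)
jobs = [(f"job{i}", random.uniform(1, 10)) for i in range(12)]
print(dynamic_assign(jobs, ["hostA", "hostB", "hostC"]))
```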
Abstract: The multiple traveling salesman problem (mTSP) can be used to model many practical problems. The mTSP is more complicated than the traveling salesman problem (TSP) because it requires determining which cities to assign to each salesman, as well as the optimal ordering of the cities within each salesman's tour. Previous studies proposed that Genetic Algorithms (GA), Integer Programming (IP) and several neural network (NN) approaches could be used to solve the mTSP. This paper compares results for the mTSP solved with a Genetic Algorithm (GA) and with the Nearest Neighbor Algorithm (NNA). The cities are clustered into groups using the k-means clustering technique, with the number of groups equal to the number of salesmen. Then, each group is solved with the NNA and the GA as an independent TSP. It is found that k-means clustering and the NNA are superior to the GA in terms of performance (evaluated by the fitness function) and computing time.
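A compact sketch of the clustering-plus-NNA pipeline described above (city coordinates and parameters are illustrative; the GA side is omitted):

```python
import math, random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def kmeans(points, k, iters=50):
    # Lloyd's algorithm: one city group per salesman.
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda c: dist(p, centers[c]))].append(p)
        centers = [tuple(sum(v) / len(g) for v in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

def nn_tour(cities):
    # Nearest Neighbor Algorithm: always visit the closest unvisited city.
    tour, rest = [cities[0]], set(cities[1:])
    while rest:
        nxt = min(rest, key=lambda c: dist(tour[-1], c))
        rest.remove(nxt)
        tour.append(nxt)
    return tour

random.seed(0)
cities = [(random.random() * 100, random.random() * 100) for _ in range(60)]
tours = [nn_tour(g) for g in kmeans(cities, k=4) if g]  # k = number of salesmen
print([len(t) for t in tours])
```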
Abstract: In this paper, we introduce a new method for elliptical
object identification. The proposed method adopts a hybrid scheme
consisting of eigenvalues of covariance matrices, the circular
Hough transform and Bresenham's raster scan algorithm. In this
approach we use the fact that the large and small eigenvalues
of the covariance matrix are associated with the major and minor
axial lengths of the ellipse, respectively. The centre location of the ellipse is
identified using the circular Hough transform (CHT). The sparse matrix
technique is used to perform the CHT. Since sparse matrices squeeze
out zero elements and contain only a small number of nonzero elements,
they provide savings in matrix storage space and computational time.
A neighborhood suppression scheme is used to find the valid Hough
peaks. The accurate positions of the circumference pixels are identified
using a raster scan algorithm which uses the geometrical symmetry
property. This method does not require the evaluation of tangents or
curvature of edge contours, which are generally very sensitive to
noise. The proposed method has the advantages of
small storage, high speed and accuracy in identifying the feature. The
new method has been tested on both synthetic and real images.
Several experiments have been conducted on various images with
considerable background noise to demonstrate its efficacy and robustness.
Experimental results on the accuracy of the proposed method, along with
comparisons with the Hough transform, its variants and other
tangent-based methods, are reported.
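The eigenvalue step can be illustrated in a few lines (a sketch of that step only, under the assumption that the edge pixels are spread uniformly in the ellipse's parametric angle; the CHT and raster scan stages are not shown):

```python
import numpy as np

def ellipse_axes(edge_pixels):
    # Covariance of the (x, y) edge coordinates; its eigenvalues carry
    # the axial-length information, its eigenvectors the orientation.
    pts = np.asarray(edge_pixels, dtype=float)
    evals, evecs = np.linalg.eigh(np.cov(pts, rowvar=False))
    # For pixels spread uniformly in the parametric angle, the variance
    # along an axis of semi-length a is a**2 / 2, hence:
    b, a = np.sqrt(2.0 * evals)          # semi-minor, semi-major lengths
    return a, b, evecs[:, 1]             # plus the major-axis direction

# Check on a synthetic ellipse with semi-axes 40 and 15:
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
print(ellipse_axes(np.c_[40 * np.cos(t), 15 * np.sin(t)]))  # ~ (40, 15, ...)
```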
Abstract: Sorting has received the most attention among all computational tasks over the past years because sorted data are at the heart of many computations. Sorting is of additional importance to parallel computing because of its close relation to the task of routing data among processes, which is an essential part of many parallel algorithms. Many parallel sorting algorithms have been investigated for a variety of parallel computer architectures. In this paper, three parallel sorting algorithms have been implemented and compared in terms of their overall execution time. The algorithms implemented are the odd-even transposition sort, parallel merge sort and parallel rank sort. A cluster of workstations (Windows Compute Cluster) has been used to compare the implemented algorithms. The C# programming language is used to develop the sorting algorithms, and the MPI (Message Passing Interface) library has been selected to establish the communication and synchronization between processors. The time complexity of each parallel sorting algorithm is also presented and analyzed.
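For reference, the compare-exchange pattern of the odd-even transposition sort looks as follows (a sequential sketch of the parallel phases; the paper's actual implementation is in C# with MPI):

```python
def odd_even_transposition_sort(a):
    # In phase p, the disjoint pairs (i, i+1) with i = p (mod 2) compare
    # and exchange; each phase is fully parallelizable, and n phases
    # suffice to sort n keys.
    a = list(a)
    n = len(a)
    for phase in range(n):
        for i in range(phase % 2, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 1, 4, 2, 8, 0, 3]))  # [0, 1, 2, 3, 4, 5, 8]
```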
Abstract: Over the past years, the EMCCD has had a profound
influence on photon-starved imaging applications, relying on its unique
multiplication register based on the impact ionization effect in
silicon. A high signal-to-noise ratio (SNR) means high image quality.
Thus, SNR improvement is important for the EMCCD. This work
analyzes the SNR performance of an EMCCD with gain off and on. In
each mode, simplified SNR models are established for different
integration times. The SNR curves are divided into readout noise (or
CIC) region and shot noise region by integration time. Theoretical
SNR values comparing long frame integration and frame adding in
each region are presented and discussed to figure out which method is
more effective. In order to further improve the SNR performance,
pixel binning is introduced into the EMCCD. The results show that
pixel binning does obviously improve the SNR performance, but at the
expense of spatial resolution.
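For reference, a commonly cited simplified form of these SNR models (an assumption on our part; the paper's exact expressions may differ) is:

```latex
% S: signal electrons collected in integration time t, D: dark current,
% n_cic: clock-induced charge, sigma_r: readout noise (electrons),
% G: EM gain, F (about sqrt(2)): excess noise factor of the gain register.
\mathrm{SNR}_{\mathrm{off}} = \frac{S}{\sqrt{S + D\,t + n_{\mathrm{cic}} + \sigma_r^{2}}},
\qquad
\mathrm{SNR}_{\mathrm{on}} = \frac{S}{\sqrt{F^{2}\,(S + D\,t + n_{\mathrm{cic}}) + (\sigma_r/G)^{2}}}
```

In this form, the readout (or CIC) term dominates at short integration times and the shot-noise term S dominates at long ones, matching the two regions described above.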
Abstract: This paper presents a Faults Forecasting System (FFS)
that utilizes statistical forecasting techniques in analyzing process
variables data in order to forecast fault occurrences. FFS
proposes a new idea in fault detection. Current techniques used in
fault detection are based on analyzing the current status of the
system variables in order to check whether the current status is faulty or not.
FFS instead uses forecasting techniques to predict the future timing of faults
before they happen. The proposed model applies a subset modeling
strategy and a Bayesian approach in order to decrease the dimensionality
of the process variables and improve fault forecasting accuracy. A
practical experiment was designed and implemented at Okayama
University, Japan, and the comparison shows that our proposed model
achieves high forecasting accuracy and forecasts faults well ahead
of their occurrence.
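As a toy illustration of forecasting a fault's timing from process-variable trends (our own simplification, not the paper's subset/Bayesian model):

```python
import numpy as np

def forecast_fault_time(times, values, threshold):
    # Fit a linear trend to the monitored variable and extrapolate the
    # time at which it will cross the fault threshold.
    slope, intercept = np.polyfit(times, values, 1)
    if slope <= 0:
        return None  # no upward drift toward the fault level
    return (threshold - intercept) / slope

t = np.arange(0, 10.0, 0.5)
x = 2.0 + 0.8 * t + np.random.default_rng(0).normal(0, 0.1, t.size)
print(forecast_fault_time(t, x, threshold=12.0))  # about 12.5 time units
```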
Abstract: In wireless sensor networks (WSNs), the use of mobile
sinks has been attracting increasing attention in recent times. Mobile sinks
are an effective means of balancing load, reducing the hotspot
problem and extending network lifetime. The sensor nodes in a WSN
have limited power supply, computational capability and storage;
reliability of continuous data delivery therefore becomes a high
priority in these networks. In this paper, we propose a Reliable
Energy-efficient Data Dissemination (REDD) scheme for WSNs with
multiple mobile sinks. In this strategy, the sink first determines the
location of the source and then communicates directly with the source
using geographical forwarding. Every forwarding node (FN) creates a
local zone comprising some sensor nodes that can act as
representatives of the FN when it fails. Analytical and simulation studies
reveal significant improvements in energy conservation and reliable
data delivery in comparison to existing schemes.
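The geographical forwarding step can be sketched as a greedy next-hop choice (an illustration of that step only; REDD's local-zone failover and sink mobility handling are not modeled here):

```python
import math

def geographic_next_hop(current, neighbors, sink):
    # Forward to the neighbor that is closest to the sink, provided it
    # makes progress; if the chosen forwarder failed, a node from its
    # local zone would take over in the REDD scheme.
    d = math.dist
    progress = [n for n in neighbors if d(n, sink) < d(current, sink)]
    if not progress:
        return None  # local minimum: a recovery strategy is needed
    return min(progress, key=lambda n: d(n, sink))

print(geographic_next_hop((0, 0), [(1, 2), (3, 1), (-1, 4)], sink=(10, 5)))
# -> (3, 1), the neighbor nearest the sink
```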
Abstract: The majority of Business Software Systems (BSS)
Development and Enhancement Projects (D&EP) fail to meet criteria
of their effectiveness, which leads to considerable financial losses.
One of the fundamental reasons for such projects' exceptionally low
success rate is improperly derived estimates of their costs and time.
In the case of BSS D&EP these attributes are determined by the work
effort; meanwhile, reliable and objective effort estimation still appears
to be a great challenge to software engineering. This paper is
therefore aimed at presenting the most important synthetic conclusions
coming from the author's own studies concerning the main factors of
effective BSS D&EP work effort estimation. Thanks to rational
investment decisions made on the basis of reliable and objective
criteria, it is possible to reduce losses caused not only by abandoned
projects but also by large-scale overruns of the time and costs of
BSS D&EP execution.
Abstract: In this paper we present a new method for coin
identification. The proposed method adopts a hybrid scheme using
eigenvalues of the covariance matrix, the Circular Hough Transform (CHT)
and Bresenham's circle algorithm. The statistical and geometrical
properties of the small and large eigenvalues of the covariance
matrix of a set of edge pixels over a connected region of support are
explored for the purpose of circular object detection. The sparse matrix
technique is used to perform the CHT. Since sparse matrices squeeze
out zero elements and contain only a small number of non-zero elements,
they provide savings in matrix storage space and computational
time. A neighborhood suppression scheme is used to find the valid
Hough peaks. The accurate positions of the circumference pixels are
identified using a raster scan algorithm which uses the geometrical
symmetry property. After finding the circular objects, the proposed
method uses the texture on the surface of the coins, called textons,
which are unique to coins and refer to the fundamental micro
structures in generic natural images. This method has been tested on
several real world images including coin and non-coin images. The
performance is also evaluated based on its noise-withstanding
capability.
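A sparse-accumulator CHT for one known radius can be sketched as follows (illustrative code built on a Python dictionary as the sparse structure; a neighborhood-suppression pass over the accumulator would then isolate the valid peaks):

```python
import math
from collections import defaultdict

def circular_hough(edge_pixels, radius, step_deg=2):
    # Sparse accumulator: only centre cells that actually receive votes
    # are stored, instead of a dense (and mostly zero) vote matrix.
    acc = defaultdict(int)
    for x, y in edge_pixels:
        for deg in range(0, 360, step_deg):
            th = math.radians(deg)
            a = round(x - radius * math.cos(th))  # candidate centre x
            b = round(y - radius * math.sin(th))  # candidate centre y
            acc[(a, b)] += 1
    return max(acc, key=acc.get), acc  # the peak is the likely centre

# Synthetic circle of radius 20 centred at (50, 40):
pts = [(50 + 20 * math.cos(math.radians(d)), 40 + 20 * math.sin(math.radians(d)))
       for d in range(0, 360, 5)]
print(circular_hough(pts, radius=20)[0])  # close to (50, 40)
```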
Abstract: This study considers the problem of determining
operation and maintenance schedules for the components of a
containership during its sailing according to a pre-determined
navigation schedule. The operation schedule, which specifies the work
time of each component, determines the due-date of each maintenance
activity, and the maintenance schedule specifies the actual start
time of each maintenance activity. The main constraints are component
requirements, workforce availability, working time limitation,
and inter-maintenance time. To represent the problem mathematically,
a mixed integer programming model is developed. Then,
due to the problem complexity, we suggest a heuristic for the objective
of minimizing the sum of earliness and tardiness between the
due-date and the starting time of each maintenance activity. Computational
experiments were done on various test instances and the
results are reported.
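The stated objective can be written as a standard linearized earliness/tardiness criterion (a sketch consistent with the description; the paper's full model also carries the workforce, working-time and inter-maintenance constraints):

```latex
% s_i: start time and d_i: due-date of maintenance activity i;
% E_i and T_i linearize the earliness and tardiness |d_i - s_i|.
\min \sum_{i} (E_i + T_i)
\quad \text{s.t.} \quad
E_i \ge d_i - s_i, \qquad T_i \ge s_i - d_i, \qquad E_i,\, T_i \ge 0 \quad \forall i
```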
Abstract: In this paper, in consideration of the deficiencies of the
available speech recognition techniques, an advanced method
is presented that is able to classify speech signals with high
accuracy (98%) in minimal time. In the presented method, the
recorded signal is first preprocessed; this stage includes
denoising with Mel-frequency cepstral analysis and feature
extraction using discrete wavelet transform (DWT) coefficients. These
features are then fed to a Multilayer Perceptron (MLP) network for
classification. Finally, after training of the neural network, effective
features are selected with the UTA algorithm.
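A rough sketch of the DWT-feature/MLP pipeline (the wavelet, the per-band log-energy features and the network size are our assumptions, not taken from the paper):

```python
import numpy as np
import pywt                                    # PyWavelets
from sklearn.neural_network import MLPClassifier

def dwt_features(signal, wavelet="db4", level=4):
    # One plausible DWT feature vector: log-energy of each band.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.log1p(np.sum(c ** 2)) for c in coeffs])

# Train on synthetic "utterances" of two classes for illustration:
rng = np.random.default_rng(0)
X = np.array([dwt_features(rng.normal(scale=s, size=512))
              for s in (1.0, 3.0) for _ in range(50)])
y = np.array([0] * 50 + [1] * 50)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X, y)
print(clf.score(X, y))
```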
Abstract: Sorghum flour was supplemented with 15 and 30%
chickpea flour. Sorghum flour and the supplement were fermented at
35 °C for 0, 8, 16, and 24 h. Changes in pH, titratable acidity, total
soluble solids, protein content, in vitro protein digestibility and
amino acid composition were investigated during fermentation and/or
after supplementation of sorghum flour with chickpea. The pH of the
fermenting material decreased sharply with a concomitant increase in
the titratable acidity. The total soluble solids remained unchanged with
progressive fermentation time. The protein content of the sorghum
cultivar was found to be 9.27% and that of chickpea 22.47%. The
protein content of the sorghum cultivar after supplementation with 15 and
30% chickpea was significantly (P ≤ 0.05) increased to 11.78 and
14.55%, respectively. The protein digestibility also increased after
fermentation from 13.35 to 30.59 and 40.56% for the supplements,
respectively. Further increment in protein content and digestibility
was observed when supplemented and unsupplemented samples were
fermented for different periods of time. Cooking of fermented
samples was found to increase the protein content slightly and
to decrease digestibility for both supplements. The amino acid content of
fermented, and of fermented and cooked, supplements was determined.
Supplementation was found to increase the lysine and threonine
content. Cooking following fermentation decreased lysine,
isoleucine, valine and sulfur-containing amino acids.
Abstract: Sustainability in a rural production system can only be achieved if the system suitably satisfies local requirements as well as outside demand as times change. With increased pressure from the food sector in a globalised world, the agrarian economy
needs to re-organise its cultivable land system to be compatible with new management practices, the multiple needs of various stakeholders and the changing resource scenario. An attempt has been made to transform this problem into a multi-objective decision-making problem considering various objectives, resource constraints and conditional constraints. An interactive fuzzy multi-objective
programming approach has been used for this purpose, taking a
case study in the Indian context to demonstrate the validity of the method.
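The interactive fuzzy step typically reduces to a max-min (Zimmermann-type) formulation; a generic sketch (the symbols are ours, not the paper's):

```latex
% mu_k: membership function of objective Z_k over its worst/best values;
% x: land-allocation decision vector constrained to the feasible set X.
\max_{x \in X,\; \lambda} \; \lambda
\quad \text{s.t.} \quad
\mu_k\!\left(Z_k(x)\right) \ge \lambda \;\; \forall k, \qquad 0 \le \lambda \le 1,
\qquad
\mu_k(Z_k) = \frac{Z_k - Z_k^{\min}}{Z_k^{\max} - Z_k^{\min}} \;\; \text{(for maximized } Z_k\text{)}
```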
Abstract: In this paper, we propose an improved 3D star skeleton
technique, a skeletonization suitable for human posture representation
that reflects the 3D information of human posture.
Moreover, the proposed technique is simple and can therefore be performed
in real time. Existing skeleton construction techniques, such as
distance transformation, Voronoi diagrams, and thinning, focus on the
precision of skeleton information. Therefore, those techniques are not
applicable to real-time posture recognition, since they are computationally
expensive and highly susceptible to boundary noise. Although
the 2D star skeleton was proposed to address these problems,
it is limited in describing the 3D information of the
posture. To represent human posture effectively, the constructed skeleton
should take the 3D information of the posture into account. The proposed 3D
star skeleton contains 3D data of the human body, and focuses on human action
and posture recognition. Our 3D star skeleton uses eight projection
maps which hold 2D silhouette information and depth data of the human
surface. The extremal points can be extracted as the features of the 3D
star skeleton without searching the whole boundary of the object. Therefore,
in execution time, our 3D star skeleton is faster than the "greedy" 3D
star skeleton, which uses all of the boundary points on the surface. Moreover,
our method offers a more accurate skeleton of the posture than the
existing star skeleton, since the 3D data of the object are taken into account.
Additionally, we build a codebook, a collection of representative 3D
star skeletons for 7 postures, to recognize which posture a constructed
skeleton represents.
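The extremal-point idea can be sketched in 2D (an illustration in the spirit of the star skeleton; the paper's method works on eight projection maps with depth data rather than a single boundary):

```python
import numpy as np

def star_skeleton_extremes(boundary, smooth=5):
    # Distances from the centroid to the (ordered) boundary points form a
    # 1-D signal; its smoothed local maxima are the "star" extremal points.
    pts = np.asarray(boundary, dtype=float)
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1)
    ds = np.convolve(d, np.ones(smooth) / smooth, mode="same")
    idx = [i for i in range(1, len(ds) - 1) if ds[i - 1] < ds[i] >= ds[i + 1]]
    return centroid, pts[idx]

# A five-lobed synthetic silhouette boundary:
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
r = 10 + 4 * np.cos(5 * t + 0.5)
centroid, tips = star_skeleton_extremes(np.c_[r * np.cos(t), r * np.sin(t)])
print(len(tips))  # ~5 extremal points
```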
Abstract: Soil microbial activity is adversely affected by pollutants such as heavy metals, antibiotics and pesticides. Organic amendments including sewage sludge, municipal compost and vermicompost have recently been used to improve soil structure and fertility. However, these materials contain heavy metals including Pb, Cd, Zn, Ni and Cu that are toxic to soil microorganisms and may lead to the emergence of more tolerant microbes. Among these metals, Pb is the most abundant and has the most negative effect on soil microbial ecology. In this study, Pb levels of 0, 100, 200, 300, 400 and 500 mg Pb [as Pb(NO3)2] per kg soil were added to pots containing 2 kg of a loamy soil and incubated for 6 months at 25°C with a soil moisture potential of −0.3 MPa. Dehydrogenase activity of the soil, as a measure of microbial activity, was determined at 15, 30, 90 and 180 days after incubation. Triphenyl tetrazolium chloride (TTC) was used as the electron acceptor in this assay. Pollution-induced community tolerance (PICT) values (as IC50) were calculated for each Pb level and incubation time. Soil microbial activity decreased with increasing Pb level during the first 30 days of incubation, but induced tolerance appeared on day 90 and thereafter. During 90 to 180 days of incubation, PICT gradually developed with increasing Pb level up to 200 mg kg-1, but the rate of enhancement was steeper at higher concentrations.
Abstract: Intradiscal and intervertebral pressure transducers
were developed. They were used to map the pressures in the nucleus
and within the annulus of human spinal segments. Their stress-relaxation
responses were recorded over a period of time for nucleus
pressure, applied load, and peripheral strain. The
results show that for normal discs, pressures in the nucleus are
viscoelastic in nature with the applied compressive load.
Mechanical strains which develop around the periphery of the
vertebral body are also viscoelastic with the applied compressive
load. Applied compressive load against time also shows viscoelastic
behavior. However, the annulus does not respond viscoelastically to
the applied load; it shows a linear response to compressive loading.
Abstract: Under normal operating conditions of a pico satellite, the
conventional Unscented Kalman Filter (UKF) gives sufficiently good
estimation results. However, if the measurements are not reliable
because of some malfunction in the estimation system, the UKF
gives inaccurate results and diverges over time. This study introduces
Robust Unscented Kalman Filter (RUKF) algorithms with filter
gain correction for the case of measurement malfunctions. By the use
of defined variables called measurement noise scale factors, the
faulty measurements are taken into consideration with a small
weight, and the estimates are corrected without affecting the
characteristics of the accurate ones. Two different RUKF algorithms,
one with a single scale factor and one with multiple scale factors, are
proposed and applied for the attitude estimation process of a pico
satellite. The results of these algorithms are compared for different
types of measurement faults in different estimation scenarios and
recommendations about their applications are given.
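The scale-factor idea can be illustrated on a single linear measurement update (a simplified sketch of the gain-correction mechanism, not the paper's full UKF; the chi-square threshold and scaling rule are our assumptions):

```python
import numpy as np

def robust_update(x, P, z, H, R, chi2_thresh=3.84):
    # If the normalized innovation is improbably large, inflate R by a
    # scale factor so the suspect measurement gets only a small weight.
    innov = z - H @ x
    S = H @ P @ H.T + R                     # nominal innovation covariance
    ratio = float(innov @ np.linalg.inv(S) @ innov)
    scale = max(1.0, ratio / chi2_thresh)   # > 1 only for suspect data
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + scale * R)  # corrected gain
    x = x + K @ innov
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P, scale

x0, P0 = np.zeros(2), np.eye(2)
H, R = np.array([[1.0, 0.0]]), np.array([[0.1]])
print(robust_update(x0, P0, np.array([5.0]), H, R)[2])  # scale >> 1: outlier
```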
Abstract: Text mining applies knowledge discovery techniques to unstructured text and is also termed knowledge discovery in text (KDT) or text data mining. In neural networks that address classification problems, the training set, the testing set and the learning rate are key elements: the collection of input/output patterns used to train the network, the patterns used to assess network performance, and the rate of weight adjustment, respectively. This paper describes a proposed back-propagation neural network classifier that performs cross-validation on the original neural network in order to improve classification accuracy and reduce training time. The feasibility and benefits of the proposed approach are demonstrated by means of five data sets: contact-lenses, cpu, weather.symbolic, weather, and labor-neg-data. It is shown that, compared to the existing neural network, the training time is reduced by more than a factor of 10 when the data set is larger than cpu or the network has many hidden units, while accuracy ('percent correct') was the same for all data sets except contact-lenses, which is the only one with missing attributes. For contact-lenses the accuracy with the proposed neural network was on average around 0.3% less than with the original neural network. This algorithm is independent of the specific data sets, so many of its ideas and solutions can be transferred to other classifier paradigms.
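A minimal cross-validated back-propagation classifier (sketch only, using scikit-learn and a toy dataset; the paper's own networks, datasets and timing setup are not reproduced):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# 5-fold cross-validation of a back-propagation (MLP) classifier.
X, y = load_iris(return_X_y=True)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())  # mean 'percent correct' across the folds
```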
Abstract: Overhead conveyor systems appeal through their simple
construction, wide application range and full compatibility with
other manufacturing systems, and they are designed according to
international standards. Ultra-light overhead conveyor systems are
rope-based conveying systems with individually driven vehicles. The
vehicles can move automatically on the rope, which is realized
through energy and signal transmission; crossings are realized by
switches. Overhead conveyor systems are particularly used in the
automotive industry but also at post offices. Overhead conveyor
systems must always be integrated into a logistical process by finding
the best way to achieve cheaper material flow and to guarantee precise
and fast workflows. With their help, any transport can take place
without wasting ground space, without excess company capacity, lost
or damaged products, erroneous deliveries, endless travel or
wasted time. Ultra-light overhead conveyor systems provide optimal
material flow, which produces profit and saves time. This article
illustrates the advantages of the structure of ultra-light overhead
conveyor systems in logistics applications and explains the steps of
their system design. After an illustration of these steps, currently
available systems on the market are presented by means of their
technical characteristics. Finally, the demands placed on an ultra-light
overhead conveyor system, given its simple construction, are outlined.
Abstract: This paper considers the problem of scheduling maintenance actions for identical aircraft gas turbine engines. Each one of the turbines consists of parts which frequently require replacement. A finite inventory of spare parts is available and all parts are ready for replacement at any time. The inventory consists of both new and refurbished parts. Hence, these parts have different field lives. The goal is to find a replacement part sequencing that maximizes the time that the aircraft will keep functioning before the inventory is replenished. The problem is formulated as an identical parallel machine scheduling problem where the minimum completion time has to be maximized. Two models have been developed. The first one is an optimization model which is based on a 0-1 linear programming formulation, while the second one is an approximate procedure which consists in decomposing the problem into several two-machine subproblems. Each subproblem is optimally solved using the first model. Both models have been implemented using Lingo and have been tested on two sets of randomly generated data with up to 150 parts and 10 turbines. Experimental results show that the optimization model is able to solve only instances with no more than 4 turbines, while the decomposition procedure often provides near-optimal solutions within a maximum CPU time of 3 seconds.
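The max-min objective described above admits a compact 0-1 formulation; a sketch consistent with the description (the notation is ours, and it assumes every part in the inventory is assigned):

```latex
% x_{ij} = 1 if spare part i, with field life p_i, is assigned to
% turbine j; C_min is the earliest time any turbine runs out of parts.
\max \; C_{\min}
\quad \text{s.t.} \quad
\sum_{j} x_{ij} = 1 \;\; \forall i, \qquad
\sum_{i} p_i\, x_{ij} \ge C_{\min} \;\; \forall j, \qquad
x_{ij} \in \{0, 1\}
```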