Abstract: Decreases in hardware costs and advances in computer networking technologies have led to increased interest in the use of large-scale parallel and distributed computing systems. One of the biggest issues in such systems is the development of effective techniques/algorithms for distributing the processes/load of a parallel program across multiple hosts to achieve goals such as minimizing execution time, minimizing communication delays, maximizing resource utilization, and maximizing throughput.
Substantial research using queuing analysis, assuming job arrivals following a Poisson pattern, has shown that in a multi-host system the probability of one host being idle while another host has multiple jobs queued up can be very high. Such imbalances in system load suggest that performance can be improved either by transferring jobs from the currently heavily loaded hosts to the lightly loaded ones or by distributing load evenly/fairly among the hosts. The algorithms that achieve this, known as load balancing algorithms, fall into two basic categories -
static and dynamic. Whereas static load balancing algorithms (SLB)
take decisions regarding assignment of tasks to processors based on
the average estimated values of process execution times and
communication delays at compile time, dynamic load balancing algorithms (DLB) adapt to changing situations and make decisions at run time.
The objective of this paper is to identify qualitative parameters for comparing these algorithms. In the future, this work can be extended to develop an experimental environment for studying these load balancing algorithms quantitatively against the comparative parameters.
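A dynamic load balancing step of the kind described above can be sketched minimally as follows; the queue representation and migration rule are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch (assumed representation, not from the paper): each host
# keeps a job queue, and a dynamic balancing step migrates jobs from the
# most heavily loaded host to the most lightly loaded one until their
# queue lengths differ by at most one job.

def balance(queues):
    """queues: dict mapping host name -> list of queued jobs (mutated in place)."""
    while True:
        heavy = max(queues, key=lambda h: len(queues[h]))
        light = min(queues, key=lambda h: len(queues[h]))
        if len(queues[heavy]) - len(queues[light]) <= 1:
            return queues
        queues[light].append(queues[heavy].pop())
```

A real DLB would also weigh migration cost against the expected gain, which this sketch omits.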
Abstract: Ultrathin (UTD) and Nanoscale (NSD) SOI-MOSFET devices, sharing similar W/L ratios but with channel thicknesses of 46 nm and 1.6 nm respectively, were fabricated using a selective “gate recessed” process on the same silicon wafer. The electrical transport characterization at room temperature showed a large difference between the two kinds of devices, which has been interpreted in terms of a huge unexpected series resistance. The electrical characteristics of the Nanoscale device, taken in the linear region, can be derived analytically from those of the ultrathin device. A comparison of the structure and composition of the layers, using advanced techniques such as Focused Ion Beam (FIB) and High Resolution TEM (HRTEM) coupled with Energy Dispersive X-ray Spectroscopy (EDS), helps explain the difference in transport between the devices.
Abstract: A method and apparatus for noninvasive measurement of blood glucose concentration, based on a laser beam transilluminated through the index finger, are reported in this paper. The method uses an atomic gas (He-Ne) laser operating at a 632.8 nm wavelength. During measurement, the index finger is inserted into the
glucose sensing unit, the transilluminated optical signal is converted
into an electrical signal, compared with the reference electrical
signal, and the resulting difference signal is processed by a signal processing unit, which presents the results in the form of a blood glucose concentration. This method would enable continuous, safe, and noninvasive monitoring of a diabetic patient's blood glucose level.
Abstract: This work deals with unsupervised image deblurring.
We present a new deblurring procedure for images provided by low-resolution synthetic aperture radar (SAR) or by ordinary multimedia sources, in the presence of multiplicative (speckle) or additive noise, respectively.
The method we propose is defined as a two-step process. First, we
use an original technique for noise reduction in wavelet domain.
Then, a Kohonen self-organizing map (SOM) is trained directly on the denoised image to remove the blur from it. This
technique has been successfully applied to real SAR images, and the
simulation results are presented to demonstrate the effectiveness of
the proposed algorithms.
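The Kohonen learning step used in the second stage can be sketched generically; the 1-D map, scalar weights, and parameter values below are illustrative assumptions rather than the paper's configuration (which trains on image data):

```python
# Illustrative sketch (not the paper's algorithm): the basic Kohonen SOM
# learning rule. Each sample pulls the best-matching unit (BMU) and its
# neighbors toward it; in the deblurring context, the samples would be
# values or patches drawn from the denoised image.
import math
import random

def train_som(samples, n_units=10, epochs=50, lr=0.5, sigma=2.0):
    random.seed(0)
    weights = [random.random() for _ in range(n_units)]  # 1-D map, scalar weights
    for t in range(epochs):
        a = lr * (1 - t / epochs)          # decaying learning rate
        for x in samples:
            bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))  # neighborhood
                weights[i] += a * h * (x - weights[i])
    return sorted(weights)
```

After training, the map's units spread over the distribution of the input samples.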
Abstract: In our modern world, more and more physical transactions are being substituted by electronic transactions (e.g. banking, shopping, and payments), and many businesses and companies perform most of their operations through the Internet. Instead of relying on physical commerce, Internet visitors are now adapting to electronic commerce (e-Commerce). Web users' ability to reach products worldwide can be greatly enhanced by creating friendly and personalized online business portals. Internet visitors will return to a particular website when they can easily find the information they need or want. Dealing with this human conceptualization calls for the incorporation of Artificial/Computational Intelligence techniques in the creation of customized portals. Among these techniques, Fuzzy-Set technologies can make many useful contributions to the development of such a human-centered endeavor as e-Commerce. The main objective of this paper is the implementation of a paradigm for the intelligent design and operation of human-computer interfaces. In particular, the paradigm is well suited to the intelligent design and operation of software modules that display information (such as Web pages, graphical user interfaces (GUIs), and multimedia modules) on a computer screen. The human conceptualization of the user's personal information is analyzed through a cascaded fuzzy inference (decision-making) system to generate the User Ascribe Qualities, which identify the user and can be used to customize portals with appropriate Web links.
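A cascaded fuzzy inference step of the kind described can be sketched as follows; the attributes (age, visit count), membership functions, and rule names are hypothetical illustrations, not the paper's system:

```python
# Illustrative sketch (hypothetical attributes and rules, not the paper's
# system): a tiny two-stage fuzzy inference. Stage 1 fuzzifies raw user
# attributes into membership degrees; stage 2 combines those degrees with
# min/max rules to produce qualities that could drive portal customization.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def cascade(age, visits):
    # Stage 1: fuzzify raw inputs.
    young = tri(age, 10, 20, 35)
    frequent = tri(visits, 5, 20, 40)
    # Stage 2: rules over stage-1 degrees (min = AND, max = OR).
    tech_savvy = min(young, frequent)          # young AND frequent
    casual = max(1 - young, 1 - frequent)      # not young OR not frequent
    return {"tech_savvy": tech_savvy, "casual": casual}
```

The cascading lies in feeding stage-1 membership degrees into a second rule base rather than into a single flat inference.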
Abstract: Diagnosis can be achieved by building a model of a
certain organ under surveillance and comparing it with the real time
physiological measurements taken from the patient. This paper deals
with the presentation of the benefits of using Data Mining techniques
in the computer-aided diagnosis (CAD), focusing on the cancer
detection, in order to help doctors to make optimal decisions quickly
and accurately. Among noninvasive diagnosis techniques, endoscopic ultrasound elastography (EUSE) is a recent elasticity imaging technique that allows characterization of the difference between malignant and benign tumors. The main features of EUSE sample movies are digitized and summarized in vector form using exploratory data analysis (EDA). Neural networks are then
trained on the corresponding EUSE sample movies vector input in
such a way that these intelligent systems are able to offer a very
precise and objective diagnosis, discriminating between benign and
malignant tumors. A concrete application of these Data Mining
techniques illustrates the suitability and the reliability of this
methodology in CAD.
Abstract: The need in cognitive radio systems for a simple, fast, and independent technique to sense spectrum occupancy has led to the energy detection approach. The energy detector is known for its dependence on noise variation in the system, which is one of its major drawbacks. In this paper, we aim to improve its performance by utilizing weighted collaborative spectrum sensing, similar to the collaborative spectrum sensing methods introduced previously in the literature. These weighting methods improve collaborative spectrum sensing compared to the unweighted case. Two methods are proposed in this paper: the first depends on the channel status between each sensor and the primary user, while the second depends on the value of the energy measured at each sensor.
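A weighted fusion rule of the first kind (channel-dependent weights) can be sketched as follows; the SNR-proportional weighting and the threshold value are illustrative assumptions, not the paper's exact scheme:

```python
# Illustrative sketch (not the paper's scheme): weighted combining of
# energy detector outputs from several sensors. Each sensor's energy
# measurement is scaled by a weight (here, hypothetically, its channel SNR
# normalized over all sensors) before the fused statistic is compared
# against a detection threshold.

def fuse(energies, snrs, threshold):
    """Return True if the weighted energy sum declares the band occupied."""
    total = sum(snrs)
    weights = [s / total for s in snrs]       # sensors with better channels count more
    statistic = sum(w * e for w, e in zip(weights, energies))
    return statistic > threshold
```

With equal weights this reduces to the ordinary (unweighted) collaborative case.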
Abstract: In this paper, the problem of stability analysis for a class of impulsive stochastic fuzzy neural networks with time-varying delays and reaction-diffusion is considered. By utilizing a suitable Lyapunov-Krasovskii functional, the inequality technique, and stochastic analysis techniques, some sufficient conditions ensuring global exponential stability of the equilibrium point for impulsive stochastic fuzzy cellular neural networks with time-varying delays and diffusion are obtained. In particular, an estimate of the exponential convergence rate is also provided, which depends on the system parameters, the diffusion effect, and the impulsive disturbance intensity. It is believed that these results are significant and useful for the design and applications of fuzzy neural networks. An example is given to show the effectiveness of the obtained results.
Abstract: The aim of every software product is to achieve an
appropriate level of software quality. Developers and designers are
trying to produce readable, reliable, maintainable, reusable and
testable code. To help achieve these goals, several approaches have
been utilized. In this paper, a refactoring technique was used to evaluate software quality via a quality index composed of different metric sets that describe various quality aspects.
Abstract: This paper proposes a new optimization technique for a gas processing plant with uncertain feed and
product flows. The problem is first formulated using a continuous
linear deterministic approach. Subsequently, the single and joint
chance constraint models for a steady-state process with time-dependent uncertainties have been developed. The solution approach
is based on converting the probabilistic problems into their
equivalent deterministic form and solving them at different confidence levels. A case study of a real plant operation has been used to implement the proposed model effectively. The optimization results indicate that prior decisions have to be made for operating the plant under uncertain feed and product flows by satisfying all the constraints at the 95% confidence level for the single chance-constrained case and the 85% confidence level for the joint chance-constrained case.
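The conversion from a probabilistic to a deterministic form can be illustrated for a single chance constraint P(a·x ≤ b) ≥ α with normally distributed right-hand side b; the numbers below are illustrative, not plant data:

```python
# Illustrative sketch (not the paper's model): a single chance constraint
# P(a*x <= b) >= alpha with b ~ Normal(mu, sigma) has the deterministic
# equivalent a*x <= mu + sigma * Phi^{-1}(1 - alpha), i.e. the right-hand
# side is tightened as the required confidence level alpha grows.
from statistics import NormalDist

def deterministic_rhs(mu, sigma, alpha):
    """Tightened right-hand side that guarantees the chance constraint."""
    return mu + sigma * NormalDist().inv_cdf(1 - alpha)
```

For example, at alpha = 0.95 a constraint with b ~ Normal(100, 10) tightens to roughly a·x ≤ 83.6, which can then be handed to an ordinary linear solver.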
Abstract: Over the past decade, mobile technology has experienced a revolution that will ultimately change the way we communicate. All these technologies share a common denominator, the exploitation of computer information systems, but their operation can be tedious because of problems with heterogeneous data sources. To overcome these problems, we propose adding an extra layer that interfaces management or supervision applications with the different data sources. This layer is materialized by the implementation of a mediator between the different host applications and the information systems, frequently organized in hierarchical and relational form, such that the heterogeneity is completely transparent to the VoIP platform.
Abstract: Cooktop burners are widely used nowadays. In
cooktop burner design, nozzle efficiency and greenhouse gas (GHG) emissions mainly depend on heat transfer from the
premixed flame to the impinging surface. This is a complicated
issue depending on the individual and combined effects of various
input combustion variables. Optimal operating conditions for
sustainable burner design were rarely addressed, especially in the
case of multiple slot-jet burners. Through evaluating the optimal
combination of combustion conditions for a premixed slot-jet
array, this paper develops a practical approach for the sustainable
design of gas cooktop burners. Efficiency, CO and NOx emissions
in respect of an array of slot jets using premixed flames were
analysed. A response surface experimental design was applied to three controllable factors of the combustion process, viz. Reynolds number, equivalence ratio and jet-to-vessel distance. The Desirability Function Approach (DFA) is the analytic technique used for the simultaneous optimization of the efficiency and emission responses.
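The simultaneous optimization step can be sketched with Derringer-style desirability functions; the response ranges below are illustrative placeholders, not measured burner data:

```python
# Illustrative sketch (not the paper's analysis): desirability functions
# map each response onto [0, 1] (1 = ideal), and the overall desirability
# is their geometric mean, which the optimizer then maximizes over the
# controllable factors.
import math

def d_maximize(y, low, high):
    """Desirability for a response to be maximized (e.g. efficiency)."""
    return min(1.0, max(0.0, (y - low) / (high - low)))

def d_minimize(y, low, high):
    """Desirability for a response to be minimized (e.g. CO or NOx)."""
    return min(1.0, max(0.0, (high - y) / (high - low)))

def overall(ds):
    """Geometric mean: any single unacceptable response (d = 0) zeroes it."""
    return math.prod(ds) ** (1.0 / len(ds))
```

The geometric mean is the standard DFA choice precisely because one fully unacceptable response vetoes the whole operating point.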
Abstract: This paper describes the shape optimization of impeller blades for an anti-heeling bidirectional axial flow pump used in ships.
In general, a bidirectional axial pump has an efficiency much lower
than the classical unidirectional pump because of the symmetry of the
blade type. In this paper, by focusing on a pump impeller, the shape of
blades is redesigned to reach a higher efficiency in a bidirectional axial
pump. The commercial code employed in this simulation is CFX v.13.
CFD results for pump torque, head, and hydraulic efficiency were compared. The orthogonal array (OA) and analysis of variance (ANOVA) techniques, together with surrogate-model-based optimization using orthogonal polynomials, are employed to determine the main effects and their optimal design variables. Based on the optimal design, we identify the effective design variables of the impeller blades and show that the optimal solution satisfies the constraints on pump torque and head.
Abstract: The multiple traveling salesman problem (mTSP) can be used to model many practical problems. The mTSP is more complicated than the traveling salesman problem (TSP) because it requires determining which cities to assign to each salesman, as well as the optimal ordering of the cities within each salesman's tour. Previous studies proposed that Genetic Algorithms (GA), Integer Programming (IP) and several neural network (NN) approaches could be used to solve the mTSP. This paper compares the results for the mTSP solved with a Genetic Algorithm (GA) and the Nearest Neighbor Algorithm (NNA). The cities are clustered into groups using the k-means clustering technique, with the number of groups depending on the number of salesmen. Then, each group is solved as an independent TSP with the NNA and the GA. It is found that k-means clustering with the NNA is superior to the GA in terms of performance (evaluated by the fitness function) and computing time.
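The cluster-first, route-second scheme described above can be sketched as follows; the coordinates, iteration counts, and seeding are illustrative, not the paper's experimental setup:

```python
# Illustrative sketch (not the paper's implementation): cluster cities with
# k-means (one cluster per salesman), then build each salesman's tour with
# the nearest neighbor heuristic.
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def kmeans(cities, k, iters=20):
    random.seed(0)
    centers = random.sample(cities, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for c in cities:
            groups[min(range(k), key=lambda i: dist(c, centers[i]))].append(c)
        centers = [
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return groups

def nearest_neighbor_tour(cities):
    """Greedy tour: always visit the closest unvisited city next."""
    tour, rest = [cities[0]], cities[1:]
    while rest:
        nxt = min(rest, key=lambda c: dist(tour[-1], c))
        tour.append(nxt)
        rest.remove(nxt)
    return tour
```

Each group returned by `kmeans` is then routed independently, either by `nearest_neighbor_tour` or by a GA.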
Abstract: This paper presents an optimization technique for economic load dispatch (ELD) problems that considers daily load patterns and generator constraints, using particle swarm optimization (PSO). The objective is to minimize the fuel cost. The optimization problem is subject to system constraints consisting of the power balance and the generation output of each unit. The application of a constriction factor in PSO is a useful strategy to ensure convergence of the particle swarm algorithm. The proposed method is able to determine the output power generation for all of the power generation units so that the total constrained cost function is minimized. The performance of the developed methodology is demonstrated by case studies on a fifteen-unit test system. The results show that the proposed algorithm can give the minimum total cost of generation while satisfying all the constraints and benefiting greatly from savings due to power loss reduction.
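The constriction factor strategy can be sketched on a generic one-dimensional cost function; Clerc's χ ≈ 0.7298 with c1 = c2 = 2.05 is the standard constriction setting, while the bounds below merely stand in for generator limits (this is not the paper's fifteen-unit ELD model):

```python
# Illustrative sketch (not the paper's dispatcher): PSO with Clerc's
# constriction factor. The factor chi multiplies the whole velocity
# update, damping oscillations and ensuring convergence without a
# separate velocity clamp.
import random

def pso(cost, lo, hi, n=20, iters=100):
    random.seed(1)
    chi, c1, c2 = 0.7298, 2.05, 2.05
    x = [random.uniform(lo, hi) for _ in range(n)]
    v = [0.0] * n
    pbest = x[:]
    gbest = min(x, key=cost)
    for _ in range(iters):
        for i in range(n):
            r1, r2 = random.random(), random.random()
            v[i] = chi * (v[i] + c1 * r1 * (pbest[i] - x[i])
                               + c2 * r2 * (gbest - x[i]))
            x[i] = min(hi, max(lo, x[i] + v[i]))   # keep within generation limits
            if cost(x[i]) < cost(pbest[i]):
                pbest[i] = x[i]
        gbest = min(pbest, key=cost)
    return gbest
```

A real ELD run would use a vector of unit outputs and penalize power-balance violations inside `cost`.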
Abstract: Traditional multivariate control charts assume that measurements from manufacturing processes follow a multivariate normal distribution. However, this assumption may not hold or may be difficult to verify, because in practice not all measurements from manufacturing processes are normally distributed. This study develops a new multivariate control chart for monitoring processes with non-normal data. We propose a mechanism based on integrating a one-class classification method with an adaptive technique. The adaptive technique is used to improve the sensitivity of one-class classification to small shifts in statistical process control. In addition, this design provides an easy way to allocate the value of the type I error, making it easier to implement. Finally, a simulation study and real data from industry are used to demonstrate the effectiveness of the proposed control chart.
Abstract: In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme which consists of eigenvalues of covariance matrices, the circular Hough transform, and Bresenham's raster scan algorithm. In this approach we use the fact that the large and small eigenvalues of a covariance matrix are associated with the major and minor axial lengths of the ellipse. The centre location of the ellipse can be
identified using circular Hough transform (CHT). Sparse matrix
technique is used to perform the CHT. Since sparse matrices discard zero elements and store only a small number of nonzero elements, they save matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate positions of circumference pixels are identified using a raster scan algorithm that exploits the geometrical symmetry
property. This method does not require the evaluation of tangents or
curvature of edge contours, which are generally very sensitive to
noisy working conditions. The proposed method has the advantages of
small storage, high speed and accuracy in identifying the feature. The
new method has been tested on both synthetic and real images.
Several experiments have been conducted on various images with
considerable background noise to reveal the efficacy and robustness.
Experimental results on the accuracy of the proposed method, together with comparisons against the Hough transform, its variants, and other tangent-based methods, are reported.
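The relation between covariance eigenvalues and axial lengths can be illustrated in closed form for a 2×2 covariance matrix; the sampled ellipse below is synthetic test data, not the paper's imagery:

```python
# Illustrative sketch (not the paper's pipeline): estimate an ellipse's
# semi-axial lengths from the eigenvalues of the covariance matrix of its
# boundary points. For a 2x2 symmetric covariance [[a, b], [b, c]] the
# eigenvalues have a closed form. For points sampled uniformly in the
# parametric angle, each semi-axis equals sqrt(2 * eigenvalue).
import math

def axis_lengths(points):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    a = sum((x - mx) ** 2 for x, _ in points) / n      # var(x)
    c = sum((y - my) ** 2 for _, y in points) / n      # var(y)
    b = sum((x - mx) * (y - my) for x, y in points) / n  # cov(x, y)
    t = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    lam1, lam2 = (a + c) / 2 + t, (a + c) / 2 - t      # large, small eigenvalues
    return math.sqrt(2 * lam1), math.sqrt(2 * lam2)    # semi-major, semi-minor
```

The large eigenvalue recovers the semi-major axis and the small one the semi-minor axis, as the abstract states.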
Abstract: This paper proposes a novel solution for optimizing
the size and communication overhead of a distributed multiagent
system without compromising the performance. The proposed approach
addresses the challenges of scalability especially when the
multiagent system is large. A modified spectral clustering technique
is used to partition a large network into logically related clusters.
Agents are assigned to monitor dedicated clusters rather than monitor
each device or node. The proposed scalable multiagent system is
implemented using JADE (Java Agent Development Environment)
for a large power system. The performance of the proposed topologyindependent
decentralized multiagent system and the scalable multiagent
system is compared by comprehensively simulating different
fault scenarios. The time taken for reconfiguration, the overall computational
complexity, and the communication overhead incurred are
computed. The results of these simulations show that the proposed
scalable multiagent system uses fewer agents efficiently, makes faster
decisions to reconfigure when a fault occurs, and incurs significantly
less communication overhead.
Abstract: Data stream analysis is the process of computing
various summaries and derived values from large amounts of data
which are continuously generated at a rapid rate. The nature of a
stream does not allow a revisit on each data element. Furthermore,
data processing must be fast to produce timely analysis results. These
requirements impose constraints on the design of the algorithms to
balance correctness against timely responses. Several techniques
have been proposed over the past few years to address these
challenges. These techniques can be categorized as either data-oriented or task-oriented. The data-oriented approach analyzes a subset of the data or a smaller transformed representation, whereas the task-oriented scheme solves the problem directly via approximation
techniques. We propose a hybrid approach to tackle the data stream
analysis problem. The data stream is both statistically transformed to a smaller size and computationally approximated in its characteristics. We adopt a Monte Carlo method in the approximation
step. The data reduction has been performed horizontally and
vertically through our EMR sampling method. The proposed method
is analyzed by a series of experiments. We apply our algorithm on
clustering and classification tasks to evaluate the utility of our
approach.
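The statistical-reduction-plus-approximation idea can be sketched with a standard single-pass reservoir sample, used here as a stand-in for the paper's EMR sampling (which is not specified in this abstract):

```python
# Illustrative sketch (standard reservoir sampling, not the paper's EMR
# method): keep a fixed-size uniform sample of a stream in one pass, since
# stream elements cannot be revisited; summary statistics are then
# approximated from the sample instead of from the full stream.
import random

def reservoir_sample(stream, k):
    random.seed(0)
    sample = []
    for i, x in enumerate(stream):
        if i < k:
            sample.append(x)
        else:
            j = random.randint(0, i)   # each element survives with prob k/(i+1)
            if j < k:
                sample[j] = x
    return sample

def estimate_mean(stream, k=100):
    """Monte Carlo estimate of the stream mean from the reservoir."""
    s = reservoir_sample(stream, k)
    return sum(s) / len(s)
```

The same sample can feed downstream clustering or classification tasks in place of the full stream.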
Abstract: This paper presents a Faults Forecasting System (FFS) that utilizes statistical forecasting techniques to analyze process variable data in order to forecast fault occurrences. FFS proposes a new idea in fault detection. Current fault detection techniques are based on analyzing the current status of the system variables in order to check whether that status is faulty or not. FFS instead uses forecasting techniques to predict the future timing of faults before they happen. The proposed model applies a subset modeling strategy and a Bayesian approach in order to reduce the dimensionality of the process variables and improve fault forecasting accuracy. A practical experiment was designed and implemented at Okayama University, Japan, and the comparison shows that the proposed model achieves high forecasting accuracy ahead of time.