Abstract: This interdisciplinary research aims to distinguish universal scale-free and field-like fundamental principles of self-organization observable across many disciplines, such as computer science, neuroscience, microbiology, and social science. Based on these universal principles, we provide basic premises and postulates for designing holistic social simulation models. We also introduce the pervasive information field (PIF) concept, which serves as a simulation medium for contextual information storage, dynamic distribution, and organization in complex social networks. The PIF concept specifically targets field-like, uncoupled, and indirect interactions among social agents capable of affecting and perceiving broadcast contextual information. The proposed approach is expressive enough to represent broadcast contextual information in a form locally accessible and immediately usable by network agents. This paper gives a prospective vision of how a system's resources (tangible and intangible) could be simulated as oscillating processes immersed in the all-pervasive information field.
Abstract: Many problems in computer vision and image
processing present potential for parallel implementations through one
of the three major paradigms of geometric parallelism, algorithmic
parallelism and processor farming. Static process scheduling
techniques are used successfully to exploit geometric and algorithmic
parallelism, while dynamic process scheduling is better suited to
dealing with the independent processes inherent in the process
farming paradigm. This paper considers the application of parallel
multi-computers to a class of problems exhibiting the spatial data
characteristic of the geometric paradigm. However, by using the
processor farming paradigm, a dynamic scheduling technique is
developed to suit the MIMD structure of the multi-computers. A
hybrid scheme of scheduling is also developed and compared with
the other schemes. The specific problem chosen for the investigation
is the Hough transform for line detection.
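The Hough transform named above accumulates votes for (ρ, θ) line parameters from edge points, and each point's votes are independent of the others, which is what makes the workload suitable for a processor farm. A minimal sequential sketch of the voting step (illustrative only, not the paper's parallel implementation; the function and parameter names are our own):

```python
import numpy as np

def hough_lines(points, shape, n_theta=180, n_rho=200):
    """Vote each edge point into a (rho, theta) accumulator:
    rho = x*cos(theta) + y*sin(theta)."""
    h, w = shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    for y, x in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.clip(np.searchsorted(rhos, r), 0, n_rho - 1)
        acc[idx, np.arange(n_theta)] += 1   # one vote per candidate angle
    return acc, rhos, thetas
```

Peaks in the accumulator correspond to detected lines; in a farming scheme each worker votes for a subset of the points and the partial accumulators are summed.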
Abstract: Load balancing in distributed computer systems is the
process of redistributing the work load among processors in the
system to improve system performance. Most previous research on
using fuzzy logic for load balancing has concentrated on utilizing
fuzzy concepts to describe processor load and task execution length.
The responsibility for the fuzzy-based load balancing process itself,
however, has not been discussed, and in most reported work it is
assumed to be performed in a distributed fashion by all nodes in the
network. This paper proposes a new fuzzy dynamic load balancing
algorithm for homogeneous
distributed systems. The proposed algorithm utilizes fuzzy logic in
dealing with inaccurate load information, making load distribution
decisions, and maintaining overall system stability. In terms of
control, we propose a new approach that specifies how, when, and by
which node the load balancing is implemented. Our approach is
called Centralized-But-Distributed (CBD).
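To illustrate how fuzzy logic can describe imprecise load information and drive a migration decision, here is a minimal sketch with triangular membership functions and a single Mamdani-style rule; the membership breakpoints and the rule are our own illustrative assumptions, not the CBD algorithm itself:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def load_state(util):
    """Fuzzify a utilization reading (0..1) into linguistic load terms."""
    return {
        "light":    tri(util, -0.5, 0.0, 0.5),
        "moderate": tri(util, 0.2, 0.5, 0.8),
        "heavy":    tri(util, 0.5, 1.0, 1.5),
    }

def should_migrate(sender_util, receiver_util, firing_threshold=0.25):
    """Single Mamdani-style rule: IF sender is heavy AND receiver is light
    THEN migrate; AND is taken as min."""
    strength = min(load_state(sender_util)["heavy"],
                   load_state(receiver_util)["light"])
    return strength > firing_threshold
```

Fuzzifying the load this way tolerates inaccurate or stale utilization readings, since small measurement errors only shift rule strengths gradually rather than flipping a crisp threshold.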
Abstract: This paper presents a comparative analysis of a new
unsupervised PCA-based technique for steel plates texture segmentation
towards defect detection. The proposed scheme, called Variance-Based
Component Analysis (VBCA), employs PCA for feature extraction,
applies a feature reduction algorithm based on the variance of
eigenpictures, and classifies pixels as defective or normal. While
the classic PCA approach uses a clusterer such as K-means for pixel
clustering, VBCA employs thresholding and some post-processing
operations to label pixels as defective or normal. The experimental
results show that the proposed VBCA algorithm is 12.46% more accurate
and 78.85% faster than the classic PCA.
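The pipeline described (PCA feature extraction, variance-based selection of eigenpictures, thresholding instead of clustering) can be pictured with the following sketch, which flags whole patches rather than pixels for simplicity; the synthetic data, the reconstruction-error rule, and the midpoint threshold are our own illustrative choices, not the paper's exact VBCA procedure:

```python
import numpy as np

def vbca_like_labels(patches, var_keep=0.9, thresh=None):
    """PCA on flattened patches, keep eigenpictures by explained variance,
    then threshold the reconstruction error to flag defective patches."""
    X = patches - patches.mean(axis=0)
    _, S, Vt = np.linalg.svd(X, full_matrices=False)
    ratio = S ** 2 / np.sum(S ** 2)                  # explained variance
    k = int(np.searchsorted(np.cumsum(ratio), var_keep)) + 1
    Vk = Vt[:k]                                      # retained eigenpictures
    err = np.linalg.norm(X - X @ Vk.T @ Vk, axis=1)  # reconstruction error
    if thresh is None:
        thresh = 0.5 * (err.min() + err.max())       # midpoint threshold
    return err > thresh
```

Patches that fit the dominant texture reconstruct well from the retained eigenpictures, while defective regions leave a large residual, so a simple threshold separates the two without any clustering step.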
Abstract: Wireless sensor networks (WSNs) consist of many sensor
nodes placed in unattended environments, such as military sites, in
order to collect important information. It is very important to
implement a secure protocol that prevents the forwarding of forged
data and the modification of aggregated data, while keeping the delay
and the communication, computation, and storage overhead low. This
paper presents a new protocol for concealed data aggregation (CDA).
In this protocol, the network is divided into virtual cells, and the
nodes within each cell produce a shared key used to send and receive
concealed data among themselves. Because data aggregation within each
cell is performed locally and a secure authentication mechanism is
implemented, the data aggregation delay is very low and malicious
nodes cannot inject false data into the network. To evaluate the
performance of our proposed protocol, we present computational models
that demonstrate its performance and low overhead.
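For intuition, concealed aggregation is often built on additively homomorphic encryption, where each node masks its reading with a key-derived pad and the aggregator sums ciphertexts without decrypting them. A minimal sketch in that style (illustrative of the general CDA idea in the spirit of Castelluccia-type schemes, not this paper's cell-based protocol; key derivation here is a simplified stand-in):

```python
import hashlib

MOD = 2 ** 32  # ciphertext space; plaintext sums must stay below this

def pad(shared_key: bytes, nonce: int) -> int:
    """Per-round keystream pad derived from the cell's shared key."""
    digest = hashlib.sha256(shared_key + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big")

def encrypt(value: int, key: bytes, nonce: int) -> int:
    return (value + pad(key, nonce)) % MOD

def aggregate(ciphertexts):
    """The aggregator sums ciphertexts without decrypting them."""
    return sum(ciphertexts) % MOD

def decrypt_sum(agg: int, keys, nonce: int) -> int:
    """The sink removes all pads to recover the sum of the readings."""
    return (agg - sum(pad(k, nonce) for k in keys)) % MOD
```

Because the aggregator never sees plaintext, compromised intermediate nodes learn nothing about individual readings, which is the property the abstract's cell-local aggregation also relies on.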
Abstract: E-services have significantly changed the way of doing
business in recent years. We can, however, observe poor use of these
services: there is a large gap between supply and actual e-services
usage. This is why we started a project to provide an environment
that will encourage the use of e-services. We believe that merely
providing an e-service does not automatically mean consumers will use
it. This paper shows the origins of our project and its current
position. We discuss the decision to use semantic web technologies
and their potential to improve e-services usage. We also present the
current knowledge base and its real-world classification, and discuss
further work to be done in the project. The current state of the
project is promising.
Abstract: Interactive installations for public spaces are a
particular kind of interactive systems, the design of which has been
the subject of several research studies. Sensor-based applications are
becoming increasingly popular, but the human-computer interaction
community is still far from reaching sound, effective large-scale
interactive installations for public spaces. The 6DSpaces project is
described in this paper as a research approach based on studying the
role of multisensory interactivity and how it can be effectively used
to draw people toward digital scientific content. The design of an
entire scientific exhibition is described, and the result was
evaluated in the real-world context of a Science Centre. The
conclusions give insight into how human-computer interaction should
be designed in order to maximize the overall experience.
Abstract: Optimization and control of reactive power distribution in
power systems leads to better operation of the reactive power
resources. Reactive power control considerably reduces power losses
and effective loads and improves the power factor of the power
system. Another important reason for reactive power control is to
improve the voltage profile of the power system. In this paper,
voltage and reactive power control using neural network techniques
has been applied to the 33-bus network of the Tehran Electric
Company. In the suggested ANN, the voltages of the PQ buses are taken
as inputs, while the generator voltages, transformer taps, and shunt
compensators are taken as outputs. The results of this technique have
been compared with linear programming, in which minimization of the
transmission line power losses is the objective function. The
comparison of the ANN results with the LP shows that the ANN
technique improves precision and reduces computation time. The ANN
technique also has a simple structure, which makes it possible to
incorporate operator experience.
Abstract: In this study, a network quality of service (QoS)
evaluation system was proposed. The system used a combination of
fuzzy C-means (FCM) and regression model to analyse and assess the
QoS in a simulated network. Network QoS parameters of multimedia
applications were intelligently analysed by FCM clustering
algorithm. The QoS parameters for each FCM cluster centre were
then inputted to a regression model in order to quantify the overall
QoS. The proposed QoS evaluation system provided valuable
information about the network's QoS patterns, and based on this
information the overall network QoS was effectively quantified.
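The FCM step can be sketched as the standard alternating update of memberships and centres (a generic implementation; the fuzzifier m=2, the synthetic usage, and the omission of the paper's subsequent regression stage are our choices):

```python
import numpy as np

def fcm(X, k, m=2.0, iters=100, seed=0):
    """Fuzzy C-means: alternate membership and centre updates."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), k))
    u /= u.sum(axis=1, keepdims=True)          # rows are fuzzy memberships
    for _ in range(iters):
        um = u ** m
        centres = um.T @ X / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        inv = (dist + 1e-12) ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return u, centres
```

In the evaluation system described above, the rows of X would be vectors of QoS parameters, and the resulting cluster centres would then be fed to the regression model to quantify overall QoS.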
Abstract: Simulation is a very powerful method for high-performance,
high-quality design in distributed systems and, considering the
heterogeneity, complexity, and cost of distributed systems, currently
perhaps the only viable one. In Grid environments, for example, it is
hard and even impossible to perform scheduler performance evaluation
in a repeatable and controllable manner, as resources and users are
distributed across multiple organizations with their own policies. In
addition, Grid test-beds are limited, and creating an adequately
sized test-bed is expensive and time consuming. Scalability,
reliability and fault-tolerance become important requirements for
distributed systems in order to support distributed computation. A
distributed system with such characteristics is called dependable.
Large environments, like Clouds, offer unique advantages, such as low
cost and dependability, and can satisfy QoS for all users. Resource
management in large environments requires high-performance scheduling
algorithms guided by QoS constraints. This
paper presents the performance evaluation of scheduling heuristics
guided by different optimization criteria. The algorithms for
distributed scheduling are analyzed in order to satisfy user
constraints while at the same time considering the independent
capabilities of resources. This analysis acts as a profiling step for
algorithm calibration. The performance evaluation is based on
simulation. The simulator is MONARC, a powerful tool for large-scale
distributed systems simulation. The novelty of this paper consists in
synthetic analysis results that offer guidelines for scheduler
service configuration and support empirically based decisions. The
results
could be used in decisions regarding optimizations to existing Grid
DAG Scheduling and for selecting the proper algorithm for DAG
scheduling in various actual situations.
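As a concrete reference point for what such DAG scheduling heuristics compute, here is a minimal earliest-finish-time list scheduler (communication costs are ignored; this is a generic textbook heuristic, not one of the MONARC-evaluated algorithms):

```python
def list_schedule(tasks, preds, cost, n_proc):
    """Greedy earliest-finish-time list scheduling of a task DAG
    (communication costs between processors are ignored)."""
    # Kahn topological sort
    indeg = {t: len(preds.get(t, [])) for t in tasks}
    succ = {t: [] for t in tasks}
    for t, ps in preds.items():
        for p in ps:
            succ[p].append(t)
    ready = [t for t in tasks if indeg[t] == 0]
    order = []
    while ready:
        t = ready.pop()
        order.append(t)
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    # assign each task to the processor giving the earliest finish time
    finish, free = {}, [0.0] * n_proc
    for t in order:
        est = max((finish[p] for p in preds.get(t, [])), default=0.0)
        best = min(range(n_proc), key=lambda i: max(free[i], est) + cost[t])
        start = max(free[best], est)
        finish[t] = free[best] = start + cost[t]
    return finish
```

Optimization criteria such as makespan or per-user QoS deadlines are then evaluated from the resulting finish times, which is the kind of comparison the simulation study above performs at scale.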
Abstract: This paper presents the design of an observer-based
controller for a class of fractional-order systems. Fractional-order
mathematics is used to express the system and the proposed observer,
and a fractional-order Lyapunov theorem is used to derive closed-loop
asymptotic stability. The gains of the observer and observer-based
controller are
derived systematically using the linear matrix inequality approach.
Finally, the simulation results demonstrate validity and effectiveness
of the proposed observer based controller.
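For reference, the fractional-order Lyapunov (Mittag-Leffler stability) condition typically invoked in such derivations has the following form; this is a standard statement from the literature, not necessarily the specific theorem used in the paper:

```latex
% Mittag-Leffler (fractional Lyapunov) stability condition, 0 < \alpha < 1:
% if there exist a Lyapunov function V and constants a, b,
% \alpha_1, \alpha_2, \alpha_3 > 0 such that
\alpha_1 \lVert x \rVert^{a} \le V(t, x) \le \alpha_2 \lVert x \rVert^{ab},
\qquad
{}^{C}_{0}\!D^{\alpha}_{t}\, V(t, x(t)) \le -\alpha_3 \lVert x \rVert^{ab},
% then the origin of the fractional-order system is Mittag-Leffler
% (and hence asymptotically) stable.
```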
Abstract: This paper proposes a new circuit design that monitors the
total leakage current during standby mode and generates the optimal
reverse body bias voltage, using the adaptive body bias (ABB)
technique to compensate for die-to-die parameter variations. Design
details of the power monitor are examined using a simulation
framework in 65nm and 32nm BTPM-model CMOS processes. Experimental
results show that the power-consumption overhead of the proposed
circuit is about 10 μW for the 32nm technology and about 12 μW for
the 65nm technology at the same supply voltage as the core power
supply. Moreover, the results show that the proposed circuit design
is not very sensitive to temperature or process variations. In
addition, it uses simple blocks that offer good sensitivity, high
speed, and a continuous feedback loop.
Abstract: Numerical analysis naturally finds applications in all
fields of engineering and the physical sciences, but in the 21st
century the life sciences and even the arts have also adopted
elements of scientific computation. Numerical data analysis has
become a key process in research and development across all of these
fields [6]. In this paper we attempt to analyze the specified
numerical patterns using association rule mining techniques with
minimum-confidence and minimum-support mining criteria. The extracted
rules and analyzed results are demonstrated graphically. Association
rules are a simple but very useful form of data mining that describe
the probabilistic co-occurrence of certain events within a database
[7]. They were originally designed to analyze market-basket data, in
which the likelihood of items being purchased together within the
same transaction is analyzed.
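The support-confidence criterion referenced above can be made concrete with a toy market-basket sketch; restricting rules to single-item antecedents and consequents is our own simplification:

```python
from itertools import combinations

def association_rules(transactions, min_support, min_conf):
    """Enumerate X -> Y rules with single-item antecedent and consequent."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})

    def support(itemset):
        return sum(1 for t in transactions if set(itemset) <= t) / n

    rules = []
    for a, b in combinations(items, 2):
        s = support((a, b))           # co-occurrence probability
        if s < min_support:
            continue
        for x, y in ((a, b), (b, a)):
            conf = s / support((x,))  # estimate of P(y | x)
            if conf >= min_conf:
                rules.append(((x,), (y,), s, conf))
    return rules
```

Support prunes rare itemsets, while confidence keeps only rules whose consequent is likely given the antecedent, which is exactly the pair of mining criteria the abstract applies to its numerical patterns.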
Abstract: The scale, complexity and worldwide geographical
spread of the LHC computing and data analysis problems are
unprecedented in scientific research. The complexity of processing
and accessing this data is increased substantially by the size and
global span of the major experiments, combined with the limited
wide area network bandwidth available. We present the latest
generation of the MONARC (MOdels of Networked Analysis at
Regional Centers) simulation framework, as a design and modeling
tool for large scale distributed systems applied to HEP experiments.
We present simulation experiments designed to evaluate the capability
of the current real-world distributed infrastructure to support
existing physics analysis processes, and the means by which the
experiments band together to meet the technical challenges posed by
the storage, access, and computing requirements of LHC data analysis
within the CMS experiment.
Abstract: The wide applicability of concurrent programming practices
in developing various software applications leads to different
concurrency errors, amongst which data races are the most important.
Java provides strong support for concurrent programming through its
various concurrency packages. Aspect-oriented programming (AOP) is a
modern programming paradigm that facilitates the runtime interception
of events of interest and can be effectively used to handle
concurrency problems. AspectJ, an aspect-oriented extension to Java,
facilitates the application of AOP concepts to data race detection.
Volatile variables are usually considered thread-safe, but they can
become candidates for data races if non-atomic operations are
performed on them concurrently. Various data race detection
algorithms have been proposed in the past, but this issue of
volatility and atomicity remains unaddressed. The aim of this
research is to propose conditions for detecting data races on
volatile fields in Java programs, taking into account the atomicity
support in the Java concurrency packages and making use of pointcuts.
Two simple test programs demonstrate the results of the research,
which are verified on two different Java
Development Kits (JDKs) for the purpose of comparison.
Abstract: In this paper we consider a nonlinear feedback control
called augmented automatic choosing control (AACC) using the
gradient optimization automatic choosing functions for nonlinear
systems. Constant terms which arise from sectionwise linearization
of a given nonlinear system are treated as coefficients of a stable
zero dynamics. Parameters included in the control are suboptimally
selected by expanding a stable region in the sense of Lyapunov
with the aid of a genetic algorithm. This approach is applied to
a field excitation control problem of a power system to demonstrate
the effectiveness of the AACC. Simulation results show that the new
controller improves performance remarkably well.
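The GA-based suboptimal parameter selection can be pictured with a tiny real-coded genetic algorithm; here the fitness is a stand-in scalar function (the paper's actual fitness, a Lyapunov stable-region measure, is not reproduced), and all operator choices are our own:

```python
import random

def ga_maximize(fitness, bounds, pop_size=30, gens=60, seed=1):
    """Tiny real-coded GA: keep the fitter half, refill with blended,
    occasionally mutated children."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            if rng.random() < 0.2:                   # mutation
                i = rng.randrange(dim)
                lo, hi = bounds[i]
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

In the setting above, the candidate vectors would be the controller parameters and the fitness would score the size of the Lyapunov stable region achieved by each candidate.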
Abstract: This paper explores the steady-state characteristics of a
grid-connected doubly fed induction motor (DFIM) in the case of unity
power factor operation. Based on the synchronized mathematical
model, analytic determination of the control laws is presented and
illustrated by various figures to understand the effect of the applied
rotor voltage on the speed and the active power. On the other hand,
unlike previous works in which the stator resistance was neglected,
in this work the stator resistance is included, so that the equations
can be applied to small wind turbine generators, which are becoming
more popular. Finally, the work concludes with the integration of the
studied induction generator into a wind system, for which a proposed
open-loop control offers remarkable simplicity of implementation
compared to the known methods.
Abstract: We propose a multi-agent based utilitarian approach
to model and understand information flows in social networks that
lead to Pareto optimal informational exchanges. We model the
individual expected utility function of the agents to reflect the net
value of information received. We show how this model, adapted
from a theorem by Karl Borch dealing with an actuarial Risk
Exchange concept in the Insurance industry, can be used for social
network analysis. We develop a utilitarian framework that allows us
to interpret Pareto optimal exchanges of value as potential
information flows, while achieving a maximization of a sum of
expected utilities of information of the group of agents. We examine
some interesting conditions on the utility function under which the
flows are optimal. We illustrate the promise of this new approach to
attach economic value to information in networks with a synthetic
example.
Abstract: Non-destructive evaluation of in-service power transformer
condition is necessary for avoiding catastrophic failures, and
Dissolved Gas Analysis (DGA) is one of the important methods.
Traditional, statistical and intelligent DGA approaches have been
adopted for accurate classification of incipient fault sources.
Unfortunately, there are often not enough faulty patterns for
sufficient training of intelligent systems. Bootstrapping is expected
to alleviate this shortcoming and yield algorithms with better
classification success rates. In this paper, the performance of
artificial neural network, K-Nearest Neighbour, and support vector
machine methods using bootstrapped data is detailed; it is shown that
while the success rate of the ANN algorithms improves remarkably, the
others do not benefit as much from the enlarged data space. For
assessment, two databases are employed: IEC TC10 and a dataset
collected from data reported in papers. The high average test success
rate demonstrates the remarkable outcome.
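Bootstrapping here means enlarging the scarce fault dataset by resampling with replacement; a minimal sketch of that step (generic resampling, independent of the classifiers compared in the paper):

```python
import numpy as np

def bootstrap_enlarge(X, y, n_new, seed=0):
    """Resample (X, y) pairs with replacement to enlarge a scarce dataset."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X), size=n_new)
    return X[idx], y[idx]
```

The classifiers are then trained on the enlarged `(Xb, yb)` sample; as the abstract notes, how much this helps depends strongly on the learning algorithm.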
Abstract: This paper presents a novel approach to real-time speed estimation of multiple traffic vehicles using fuzzy logic and image processing techniques with a proper arrangement of camera parameters. The described algorithm consists of several important steps. First, the background is estimated by computing the median over a time window of specific frames. Second, the foreground is extracted using a fuzzy similarity approach (FSA) between the estimated background pixels and the current frame pixels containing both foreground and background. Third, the traffic lanes are divided into two parts, one per direction, for parallel processing. Finally, the speeds of the vehicles are estimated by a Maximum a Posteriori (MAP) estimator. The true ground speed is determined using infrared sensors for three different vehicles, and the results are compared to the proposed algorithm, which achieves an accuracy of ±0.74 km/h.
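The first two steps (a median background over a frame window, then a similarity-based foreground mask) can be sketched as follows; the pixel-difference similarity and its threshold are simplified stand-ins for the paper's fuzzy similarity approach:

```python
import numpy as np

def estimate_background(frames):
    """Pixel-wise temporal median over the frame window."""
    return np.median(np.stack(frames), axis=0)

def foreground_mask(frame, background, sim_thresh=0.9):
    """Mark pixels whose similarity to the background is low as foreground."""
    similarity = 1.0 - np.abs(frame - background) / 255.0
    return similarity < sim_thresh
```

The temporal median suppresses moving vehicles, so the remaining low-similarity pixels in each new frame are the vehicle blobs passed on to the lane-splitting and MAP speed-estimation stages.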