Abstract: The proposed paper examines strategies that aim to
counter the all too often observed process of abandonment that
characterizes contemporary cities. The city of Nicosia in Cyprus is
used as an indicative case study, in which several recent projects
are presented as capitalizing on traditional cultural assets to
revive the downtown. The reuse of existing building stock as
museums, performing arts centers, and theaters, but also in the form
of various housing typologies, is intended to strengthen the ranks
of local residents and to spur economic growth. Unlike the examples
from the 1960s, the architecture of more recent adaptive reuse for
urban regeneration seems geared toward reinforcing a connection to
the city, with buildings that often reflect the characteristics of
their urban context.
Abstract: In wireless sensor networks, mobile agent technology is used for data fusion. Based on node residual energy and the results of partial fusion, we design a node clustering algorithm, and we optimize the mobile agent's intra-cluster routing strategy to further reduce the amount of data transferred. Experiments show that using mobile agents in the intra-cluster fusion process can reduce path loss to some extent.
Abstract: Smoothing or filtering of data is the first preprocessing
step for noise suppression in many applications involving data
analysis. The moving average is the most popular method of smoothing
data, and its generalization led to the development of the
Savitzky-Golay filter. Many window smoothing methods were developed
by convolving the data with different window functions for different
applications; the most widely used window functions are the Gaussian
and the Kaiser. Function approximation of the data by polynomial
regression, Fourier expansion, or wavelet expansion also yields
smoothed data, and wavelets smooth the data to a great extent
through thresholding of the wavelet coefficients. Almost all
smoothing methods destroy peaks and flatten them as the support of
the window is increased. In certain applications it is desirable to
retain peaks while smoothing the data as much as possible. In this
paper we present a methodology, called peak-wise smoothing, that
smooths the data to any desired level without losing the major peak
features.
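The trade-off this abstract describes can be seen in a few lines of code: a plain moving average (shown only as an illustration, not the proposed peak-wise method, whose details go beyond the abstract) suppresses noise but also attenuates a sharp peak as the window widens.

```python
import numpy as np

def moving_average(x, w):
    """Smooth x with a centered moving average of odd window length w."""
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="same")

# A noisy signal with one sharp (narrow Gaussian) peak.
rng = np.random.default_rng(0)
n = np.arange(200)
signal = np.exp(-0.5 * ((n - 100) / 3.0) ** 2)
noisy = signal + 0.05 * rng.standard_normal(n.size)

# A wide window suppresses the noise but also flattens the peak,
# which is exactly the drawback peak-wise smoothing aims to avoid.
smoothed = moving_average(noisy, 21)
print(noisy.max(), smoothed.max())
```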
Abstract: The computer has become an essential tool in modern
life, and the combined use of a computer with a projector is very
common in teaching and presentations. However, as typical computer
operating devices involve a mouse or keyboard, presenters often need
to stay near the computer to execute functions such as changing
pages, writing, and drawing, which makes the operation
time-consuming and reduces interaction with the audience. This paper
proposes a laser pointer interaction system able to simulate mouse
functions so that users need not remain near the computer but can
operate it directly with a laser pointer from a distance. It can
effectively reduce the users' time spent at the computer, allowing
for greater interaction with the audience.
Abstract: This work explores blind image deconvolution by recursive function approximation based on supervised learning of neural networks, under the assumption that a degraded image is the linear convolution of an original source image with a linear shift-invariant (LSI) blurring matrix. Supervised learning of radial basis function (RBF) neural networks is employed to construct an embedded recursive function within a blurred image, extract the non-deterministic component of the original source image, and use it to estimate the hyperparameters of a linear image degradation model. Based on the estimated blurring matrix, reconstruction of the original source image from the blurred image is then resolved by an annealed Hopfield neural network. Numerical simulations show that the proposed method is effective for faithful estimation of an unknown blurring matrix and restoration of the original source image.
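As a minimal sketch of the degradation model assumed here (only the linear shift-invariant blurring step; the RBF learning and Hopfield restoration stages are not shown), a 1-D blurring matrix H can be built from an assumed kernel and applied to a source signal:

```python
import numpy as np

# Sketch of the LSI degradation model g = H f: the blurred signal g is
# the linear convolution of the source f with a shift-invariant kernel,
# realized as multiplication by a banded blurring matrix H.
def blurring_matrix(kernel, n):
    """Build the n-by-n LSI blurring matrix for a small kernel."""
    k = len(kernel) // 2
    H = np.zeros((n, n))
    for i in range(n):
        for j, w in enumerate(kernel):
            col = i + j - k
            if 0 <= col < n:
                H[i, col] = w
    return H

kernel = np.array([0.25, 0.5, 0.25])   # example blur kernel (an assumption)
f = np.array([0.0, 0.0, 1.0, 0.0, 0.0])  # unit impulse as the source
H = blurring_matrix(kernel, 5)
g = H @ f
# The impulse response of H reproduces the kernel, confirming that the
# blur is shift-invariant away from the boundaries.
print(g)
```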
Abstract: This paper proposes a new parameter identification
method based on Linear Fractional Transformation (LFT). It is
assumed that the target linear system includes unknown parameters.
The parameter deviations are separated from a nominal system via
LFT, and identified by organizing I/O signals around the separated
deviations of the real system. The purpose of this paper is to apply LFT
to simultaneously identify the parameter deviations in systems with
fewer outputs than unknown parameters. As a fundamental example,
this method is applied to a one-degree-of-freedom vibratory system.
Via LFT, all physical parameters were simultaneously identified in this
system. Then, numerical simulations were conducted for this system to
verify the results. This study shows that all the physical parameters of a
system with fewer outputs than unknown parameters can be effectively
identified simultaneously using LFT.
Abstract: Methods to detect and localize time singularities of polynomial and quasi-polynomial ordinary differential equations are systematically presented and developed. They are applied to examples taken from different fields of application, and they are also compared to better-known methods such as those based on the existence of linear first integrals or Lyapunov functions.
Abstract: This work presents a multiple objective linear programming (MOLP) model based on the desirability function approach for solving the aggregate production planning (APP) decision problem, built upon Masud and Hwang's model. The proposed model minimises total production costs, carrying or backordering costs, and rates of change in labor levels. An industrial case demonstrates the feasibility of applying the proposed model to APP problems under three scenarios of inventory levels. The proposed model yields an efficient compromise solution and the overall levels of DM satisfaction with the multiple combined response levels. There has been a trend toward solving complex planning problems using various metaheuristics. Therefore, in this paper, the multi-objective APP problem is also solved by hybrid metaheuristics based on the hunting search (HuSIHSA) and firefly (FAIHSA) mechanisms on the improved harmony search algorithm, and the resulting solutions are compared. It is observed that the FAIHSA can be used as a successful alternative solution mechanism for solving APP problems over the three scenarios. Furthermore, with proper selection of control parameters, the FAIHSA provides a systematic framework for facilitating the decision-making process, enabling a decision maker to interactively modify the desirability function approach and related model parameters until a satisfactory compromise solution is obtained.
Abstract: This paper deals with the tuning of parameters for Automatic Generation Control (AGC). A two-area interconnected hydrothermal system with a PI controller is considered. Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) algorithms have been applied to optimize the controller parameters. Two objective functions, namely the Integral of Squared Error (ISE) and the Integral of Time-multiplied Absolute value of the Error (ITAE), are considered for optimization. The effectiveness of each objective function is assessed based on the variation in tie-line power and the change in frequency in both areas. MATLAB/SIMULINK was used as the simulation tool. Simulation results reveal that ITAE is a better objective function than ISE. The performances of the two optimization algorithms are also compared, and it was found that the genetic algorithm gives better results than the particle swarm optimization algorithm for the AGC problem.
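The two objective functions can be stated compactly; the sketch below evaluates both on a sampled error trace (the decaying exponential error is an assumed example, not the paper's SIMULINK model):

```python
import numpy as np

# The two AGC objective functions, evaluated on a sampled error signal
# e(t), e.g. a frequency deviation or tie-line power deviation.
def _integrate(t, y):
    """Trapezoidal approximation of the integral of y over t."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def ise(t, e):
    """Integral of Squared Error: integral of e(t)^2 dt."""
    return _integrate(t, e ** 2)

def itae(t, e):
    """Integral of Time-multiplied Absolute Error: integral of t*|e(t)| dt."""
    return _integrate(t, t * np.abs(e))

t = np.linspace(0.0, 10.0, 10001)
e = np.exp(-t)  # an assumed decaying error response
print(ise(t, e), itae(t, e))
# ITAE weights errors that persist late in the response more heavily,
# which is one reason it often yields better-damped tunings than ISE.
```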
Abstract: The development of Internet technology in recent years has led to a more active role of users in creating Web content. This has significant effects both on individual learning and collaborative knowledge building. This paper will present an integrative framework model to describe and explain learning and knowledge building with shared digital artifacts on the basis of Luhmann's systems theory and Piaget's model of equilibration. In this model, knowledge progress is based on cognitive conflicts resulting from incongruities between an individual's prior knowledge and the information contained in a digital artifact. Empirical support for the model will be provided by 1) applying it descriptively to texts from Wikipedia, 2) examining knowledge-building processes using a social network analysis, and 3) presenting a survey of a series of experimental laboratory studies.
Abstract: How to coordinate the behaviors of the agents through
learning is a challenging problem within multi-agent domains.
Because of its complexity, recent work has focused on how
coordinated strategies can be learned. Here we are interested in using
reinforcement learning techniques to learn the coordinated actions of a
group of agents, without requiring explicit communication among
them. However, traditional reinforcement learning methods are based
on the assumption that the environment can be modeled as a Markov
Decision Process, an assumption that usually cannot be satisfied
when multiple agents coexist in the same environment. Moreover, to
effectively coordinate each agent's behavior so as to achieve the
goal, it is necessary to augment the state of each agent with
information about the other agents. However, as the number of agents
in a multiagent environment increases, the state space of each agent
grows exponentially, causing a combinatorial explosion. Profit
sharing is a reinforcement learning method that allows agents to
learn effective behaviors from their experiences even within
non-Markovian environments. In this paper, to remedy the drawback of
the original profit sharing approach, which needs much memory to
store each state-action pair during the learning process, we first
present an on-line rational profit sharing algorithm. We then
integrate the advantages of a modular learning architecture with the
on-line rational profit sharing algorithm and propose a new modular
reinforcement learning model. The effectiveness of the technique is
demonstrated using the pursuit problem.
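The credit-assignment idea behind profit sharing can be sketched as follows. This is the classic episodic form, not the on-line modular variant proposed here, and the decay rate is an assumed value:

```python
from collections import defaultdict

# Classic profit sharing: when an episode ends with reward r, credit is
# shared backward over the episode's state-action pairs with a
# geometrically decaying credit function. Keeping the decay rate below
# 1/L (L = max actions per state) satisfies the usual rationality
# condition, so effective rules are reinforced more than detours.
def profit_sharing_update(q, episode, reward, decay=0.25):
    """Reinforce the state-action pairs visited in `episode` (newest last)."""
    credit = reward
    for state, action in reversed(episode):
        q[(state, action)] += credit
        credit *= decay
    return q

q = defaultdict(float)
episode = [("s0", "right"), ("s1", "right"), ("s2", "up")]  # toy trace
profit_sharing_update(q, episode, reward=1.0)
print(q[("s2", "up")], q[("s0", "right")])
```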
Abstract: Obtaining labeled data in supervised learning is often
difficult and expensive, so a learning algorithm trained on a small
number of examples tends to overfit. As a result, some researchers
have focused on using unlabeled data, which need not follow the same
generative distribution as the labeled data, to construct high-level
features that improve performance on supervised learning tasks. In
this paper, we investigate the impact of the relationship between
unlabeled and labeled data on classification performance.
Specifically, we apply different unlabeled datasets, with varying
degrees of relation to the labeled data, to a handwritten digit
classification task based on the MNIST dataset. Our experimental
results show that the higher the degree of relation between
unlabeled and labeled data, the better the classification
performance. Although unlabeled data drawn from a completely
different generative distribution than the labeled data yields the
lowest classification performance, we still achieve high
classification performance. This expands the applicability of
supervised learning algorithms through the use of unsupervised
learning.
Abstract: Higher-order statistics (HOS), such as cumulants and
cross moments, and their frequency-domain counterparts, known as
polyspectra, have emerged as a powerful signal processing tool for
the synthesis and analysis of signals and systems. Algorithms used
for the computation of cross moments are computationally intensive
and require high computational speed for real-time applications. For
efficiency and high speed, it is often advantageous to realize
computation-intensive algorithms in hardware. A promising
solution that combines high flexibility together with the speed of a
traditional hardware is Field Programmable Gate Array (FPGA). In
this paper, we present FPGA-based parallel architecture for the
computation of third-order cross moments. The proposed design is
coded in Very High Speed Integrated Circuit (VHSIC) Hardware
Description Language (VHDL) and functionally verified by
implementing it on Xilinx Spartan-3 XC3S2000FG900-4 FPGA.
Implementation results are presented, showing that the proposed
design can operate at a maximum frequency of 86.618 MHz.
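For reference, the quantity the architecture computes can be written in a few lines of software (an illustration only; the paper's contribution is the parallel FPGA realization): the third-order cross moment of sequences x, y, z at lags (t1, t2).

```python
import numpy as np

# Third-order cross moment of three real sequences at lags (t1, t2):
#   c_xyz(t1, t2) = (1/N) * sum_n x[n] * y[n + t1] * z[n + t2],
# with the sum taken over indices where all three samples exist.
def third_order_cross_moment(x, y, z, t1, t2):
    n = len(x)
    lo = max(0, -t1, -t2)
    hi = min(n, n - t1, n - t2)
    idx = np.arange(lo, hi)
    return float(np.mean(x[idx] * y[idx + t1] * z[idx + t2]))

x = np.array([1.0, 2.0, 3.0, 4.0])
# At zero lag with x = y = z this reduces to the mean of x cubed.
c = third_order_cross_moment(x, x, x, 0, 0)
print(c)
```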
Abstract: Gene expression profiling is rapidly evolving into a
powerful technique for investigating tumor malignancies. Researchers
today face an abundance of microarray-based platforms and methods
that give them the freedom to conduct large-scale gene expression
profiling measurements. Simultaneously, investigations into
cross-platform integration methods have started gaining momentum due
to their underlying potential to help comprehend a myriad of broad
biological issues in tumor diagnosis, prognosis, and therapy.
However, comparing results from different platforms remains a
challenging task, as various inherent technical differences exist
between microarray platforms. In this paper, we describe a simple
ratio-transformation method, which can provide some common ground
for the cDNA and Affymetrix platforms toward cross-platform
integration. The method is based on the characteristic data
attributes of the Affymetrix and cDNA platforms. In this work, we
considered seven childhood leukemia patients and their gene
expression levels on each platform. With a dataset of 822
differentially expressed genes from both platforms, we applied a
specific ratio treatment to the Affymetrix data, which subsequently
showed an improved relationship with the cDNA data.
Abstract: It is well known that a linear dynamic system including
a delay will exhibit limit cycle oscillations when a bang-bang sensor
is used in the feedback loop of a PID controller. A similar behaviour
occurs when a delayed feedback signal is used to train a neural
network. This paper develops a method of predicting this behaviour
by linearizing the system, which can be shown to behave in a manner
similar to an integral controller. Using this procedure, it is possible
to predict the characteristics of the neural network driven limit cycle
to varying degrees of accuracy, depending on the information known
about the system. An application is also presented: the intelligent
control of a spark ignition engine.
Abstract: A dent is a gross distortion of the pipe cross-section.
Dent depth is defined as the maximum reduction in the diameter of
the pipe compared to the original diameter. Pipeline dent finite
element (FE) simulation and theoretical analysis are conducted in this
paper to develop an understanding of the geometric characteristics
and strain distribution in the pressurized dented pipe. Based on the
results, the magnitude of the denting force increases significantly
with increasing internal pressure, and the maximum circumferential
and longitudinal strains increase with increasing internal pressure
and dent depth. The results can be used for
characterizing dents and ranking their risks to the integrity of a
pipeline.
Abstract: In this paper the main objective is to analyze the
quality of service of the bus companies operating in the city of
Campos, located in the state of Rio de Janeiro, Brazil. This analysis,
based on the opinion of the bus customers, will help to determine
their degree of satisfaction with the service provided by the bus
companies. The result of this assessment shows that the bus
customers are displeased with the quality of service supplied by the
bus companies. Therefore, it is necessary to identify alternative
solutions to minimize the consequences of the main problems related
to customers' dissatisfaction identified in our evaluation and to
help the bus companies operating in Campos better fulfill their
riders' needs.
Abstract: While compressing text files is useful, compressing
still image files is almost a necessity. A typical image takes up much
more storage than a typical text message and without compression
images would be extremely clumsy to store and distribute. The
amount of information required to store pictures on modern
computers is quite large relative to the bandwidth commonly
available to transmit them over the Internet. Image compression
addresses the problem of reducing the amount of data required to
represent a digital image. The performance of any image compression
method can be evaluated by measuring the root-mean-square error
(RMSE) and the peak signal-to-noise ratio (PSNR). The method of
image compression analyzed in this paper is based on the lossy JPEG
image compression technique, the most popular compression technique
for color images. JPEG compression is able to greatly reduce file
size with minimal image degradation by throwing away the least
"important" information. In standard JPEG, both chroma components
are downsampled simultaneously, but in this paper we compare the
results when compression is performed by downsampling a single
chroma component. We demonstrate that a higher compression ratio is
achieved when the blue-difference chrominance (Cb) is downsampled
than when the red-difference chrominance (Cr) is downsampled, but
that the PSNR is higher when Cr is downsampled than when Cb is
downsampled. In particular, we use the image hats.jpg to demonstrate
JPEG compression using a low-pass filter, and show that the image is
compressed with barely any visible difference under either method.
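The two evaluation metrics named here, together with 2x subsampling of a single chroma channel, can be sketched as follows (this is an illustration of the comparison's building blocks, not the paper's full JPEG pipeline; the random test channel is an assumption):

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two images or channels."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit data."""
    return float(20.0 * np.log10(peak / rmse(a, b)))

def subsample_2x(channel):
    """Average 2x2 blocks, then replicate each average back to full size,
    mimicking chroma subsampling followed by nearest-neighbor upsampling."""
    h, w = channel.shape
    small = channel.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

# Subsampling one chroma channel and measuring the distortion it causes.
rng = np.random.default_rng(1)
cb = rng.uniform(0, 255, size=(8, 8))
cb_rec = subsample_2x(cb)
print(rmse(cb, cb_rec), psnr(cb, cb_rec))
```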
Abstract: The demand for energy is increasing faster than the
generation, leading to a shortage of power in all sectors of
society. At peak hours this shortage is higher. Unless we adopt
energy-efficient technology, it is very difficult to minimize the
shortage of energy, so energy efficiency programs and energy
conservation have an important role. Energy-efficient technologies
are cost-intensive, and hence are not always feasible to implement
in a country like India. In the present study, an educational
building with operating hours from 10:00 a.m. to 05:00 p.m. was
selected to quantify the potential for lighting energy conservation.
As the operating hours fall in the daytime, integrating daylight
with the artificial lighting system will definitely reduce lighting
energy consumption. Moreover, keeping the initial investment low was
given priority, and hence the existing lighting installation was
left unaltered. An automatic controller has been designed which
operates as a function of the daylight entering through the windows,
and the lighting system of the room functions accordingly. The
results of integrating daylight proved quite satisfactory for visual
comfort as well as energy conservation.
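The control logic described above might look like the following sketch; the 300 lux setpoint and the hysteresis band are assumed values for illustration, not parameters taken from the study:

```python
# Daylight-linked on/off control: artificial lighting is switched off
# when daylight through the windows already meets the design
# illuminance. Hysteresis avoids rapid switching around dusk.
def lights_on(daylight_lux, setpoint_lux=300.0, hysteresis_lux=50.0,
              currently_on=True):
    """Return whether the artificial lighting should be on."""
    if daylight_lux >= setpoint_lux + hysteresis_lux:
        return False  # ample daylight: lights off
    if daylight_lux <= setpoint_lux - hysteresis_lux:
        return True   # too dark: lights on
    return currently_on  # inside the deadband: keep the current state

print(lights_on(600.0))                       # bright day
print(lights_on(100.0))                       # overcast / evening
print(lights_on(320.0, currently_on=False))   # deadband: stays off
```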
Abstract: A system for market identification (SMI) is presented.
The resulting representations are multivariable dynamic demand
models. The market specifics are analyzed. Appropriate models and
identification techniques are chosen. Multivariate static and dynamic
models are used to represent the market behavior. The steps of the
first stage of SMI, named data preprocessing, are mentioned. Next,
the second stage, which is the model estimation, is considered in more
detail. Stepwise linear regression (SWR) is used to determine the
significant cross-effects and the orders of the model polynomials. The
estimates of the model parameters are obtained by a numerically stable
estimator. Real market data is used to analyze SMI performance.
The main conclusion is related to the applicability of multivariate
dynamic models for representation of market systems.
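The SWR step can be sketched as a forward-selection loop (an illustration of stepwise regression in general, not the SMI estimator; the F-threshold and the synthetic data are assumptions, and no intercept is fitted):

```python
import numpy as np

# Forward stepwise linear regression: regressors are added one at a
# time, keeping the candidate that most reduces the residual sum of
# squares, until the improvement is no longer significant by a rough
# F-test.
def forward_stepwise(X, y, f_threshold=4.0):
    n, p = X.shape
    selected, remaining = [], list(range(p))
    best_rss = float(np.sum(y ** 2))
    while remaining:
        scores = []
        for j in remaining:
            cols = selected + [j]
            beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = float(np.sum((y - X[:, cols] @ beta) ** 2))
            scores.append((rss, j))
        rss, j = min(scores)
        # Stop when the best candidate's improvement looks like noise.
        f_stat = (best_rss - rss) * (n - len(selected) - 1) / rss
        if f_stat < f_threshold:
            break
        selected.append(j)
        remaining.remove(j)
        best_rss = rss
    return selected

# Synthetic data in which only columns 1 and 3 drive the response.
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 5))
y = 2.0 * X[:, 1] - 1.5 * X[:, 3] + 0.1 * rng.standard_normal(100)
print(forward_stepwise(X, y))  # columns 1 and 3 are selected first
```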