Abstract: Adaptive echo cancellers based on the two-path algorithm are used to avoid false adaptation during double-talk. In the two-path algorithm, several transfer logic solutions have been proposed to control the filter update. This paper presents an improved transfer logic solution that increases the convergence speed of the two-path algorithm and reduces the required memory elements and computational complexity. Simulation results show the improved performance of the proposed solution.
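As a minimal sketch of the general two-path structure (not the transfer logic proposed in this paper), the Python code below runs an adaptive background NLMS filter alongside a fixed foreground filter and copies the background coefficients to the foreground only when a simple, hypothetical error-power test fires; the filter length, step size, smoothing constants and test threshold are all illustrative assumptions.

```python
import numpy as np

def two_path_nlms(x, d, L=128, mu=0.5, eps=1e-8, block=256):
    """Two-path echo canceller sketch: the background filter adapts with NLMS,
    the foreground filter is updated only when the transfer logic fires."""
    w_bg = np.zeros(L)          # background (adaptive) filter
    w_fg = np.zeros(L)          # foreground (fixed) filter used for the output
    e_out = np.zeros(len(d))
    err_bg = err_fg = 1e-12     # smoothed error powers for the transfer test
    for n in range(L, len(d)):
        xv = x[n-L:n][::-1]                  # most recent L far-end samples
        e_bg = d[n] - w_bg @ xv              # background error
        e_fg = d[n] - w_fg @ xv              # foreground error (sent to far end)
        e_out[n] = e_fg
        # NLMS update of the background filter only
        w_bg += mu * e_bg * xv / (xv @ xv + eps)
        # simple transfer logic (hypothetical): compare smoothed error powers
        err_bg = 0.99 * err_bg + 0.01 * e_bg**2
        err_fg = 0.99 * err_fg + 0.01 * e_fg**2
        if n % block == 0 and err_bg < 0.5 * err_fg:
            w_fg = w_bg.copy()               # copy background -> foreground
    return e_out

# hypothetical far-end signal and microphone signal (echo plus near-end noise)
rng = np.random.default_rng(1)
x = rng.standard_normal(20000)
echo_path = 0.1 * rng.standard_normal(128)
d = np.convolve(x, echo_path)[:len(x)] + 0.01 * rng.standard_normal(len(x))
e = two_path_nlms(x, d)
```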
Abstract: When binary decision diagrams are formed from
uniformly distributed Monte Carlo data for a large number of
variables, the complexity of the decision diagrams exhibits a
predictable relationship to the number of variables and minterms. In
the present work, a neural network model has been used to analyze the
pattern of shortest path length for larger numbers of Monte Carlo data
points. The neural model shows a strong descriptive power for the
ISCAS benchmark data with an RMS error of 0.102 for the shortest
path length complexity. Therefore, the model can be considered as a
method of predicting path length complexities; this is expected to lead
to minimum time complexity of very large-scale integrated circuits
and related computer-aided design tools that use binary decision
diagrams.
Abstract: Human-friendly interaction is a key function of a human-centered system. Over the years, much attention has been devoted to developing convenient interaction through intention recognition. Intention recognition processes multimodal inputs including speech, face images, and body gestures. In this paper, we propose a novel approach to intention recognition using a graph representation called the Intention Graph. A concept of valid intention is proposed as the target of intention recognition. Our approach has two phases: a goal recognition phase and an intention recognition phase. In the goal recognition phase, we generate an action graph based on the observed actions, and then the candidate goals and their plans are recognized. In the intention recognition phase, the intention is recognized using the relevant goals and the user profile. We show that the algorithm has polynomial time complexity. The intention graph is applied to a simple briefcase domain to test our model.
Abstract: Humans perceive color in categories, which may be identified using color names such as red, blue, etc. The categorization is unique to each human being. However, despite these individual differences, the categorization is shared among the members of a society. This allows communication among them, especially when using color names. A sociable robot, in order to coexist with humans and become part of human society, must also have this shared color categorization, which can be achieved through learning. Much work has been done to enable a computer, as the brain of a robot, to learn color categorization. Most of it relies on modeling human color perception and involves considerable mathematical complexity. In contrast, in this work the computer learns color categorization through interaction with humans. This work aims at developing the innate ability of the computer to learn human-like color categorization. It focuses on the representation of color categorization and on how it is built and developed without much mathematical complexity.
Abstract: The H.264/AVC video coding standard contains a number of advanced features. One of the new features introduced in this standard is multiple intra-mode prediction, which exploits the directional spatial correlation with adjacent blocks for intra prediction. With this new feature, intra coding in H.264/AVC offers considerably higher coding efficiency than other compression standards, but the computational complexity increases significantly when a brute-force rate-distortion optimization (RDO) algorithm is used. In this paper, we propose a new fast intra prediction mode decision method to reduce the complexity of H.264 video coding. For luma intra prediction, the proposed method consists of two steps. In the first step, we perform RDO for four modes of the intra 4x4 block; based on the distribution of the RDO costs of these modes and on the strong correlation with adjacent modes, we select the best mode of the intra 4x4 block. In the second step, based on the fact that the dominant direction of a smaller block is similar to that of a bigger block, the candidate modes of 8x8 blocks and 16x16 macroblocks are determined. For chroma intra prediction, since the variance of the chroma pixel values is much smaller than that of the luma values, our method uses only the DC mode. Experimental results show that the new fast intra mode decision algorithm significantly increases the speed of intra coding with negligible loss of PSNR.
Abstract: Process measurement is the task of empirically and objectively assigning numbers to the properties of business processes in such a way as to describe them. Desirable attributes to study and measure include complexity, cost, maintainability, and reliability. In our work we focus on investigating process complexity. We define process complexity as the degree to which a business process is difficult to analyze, understand or explain. One way to analyze a process's complexity is to use a process control-flow complexity measure. In this paper, an attempt has been made to evaluate the control-flow complexity measure in terms of Weyuker's properties. Weyuker's properties must be satisfied by any complexity measure to qualify as a good and comprehensive one.
Abstract: Duplicated region detection is a technique to expose copy-paste forgeries in digital images. Copy-paste is one of the common types of forgery, in which a portion of an image is cloned in order to conceal or duplicate a specific object. In this type of forgery detection, extracting robust block features and the high time complexity of the matching step are two main open problems. This paper concentrates on computational time and proposes a local block matching algorithm based on block clustering to reduce the time complexity. The time complexity of the proposed algorithm is formulated, and the effects of two parameters, block size and number of clusters, on the efficiency of the algorithm are considered. The experimental results and mathematical analysis demonstrate that this algorithm is more cost-effective than lexicographic sorting algorithms in terms of time complexity when the image is complex.
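To illustrate the general idea of block clustering for duplicated-region matching (not the exact algorithm of this paper), the sketch below groups overlapping blocks by a cheap mean-intensity feature and runs exhaustive matching only inside each cluster; the block size, step, clustering rule and matching threshold are all assumed values.

```python
import numpy as np

def clustered_block_matching(img, b=16, step=4, n_clusters=8, thr=2.0):
    """Cluster overlapping b x b blocks by mean intensity, then match
    blocks exhaustively only inside each cluster (illustrative sketch)."""
    h, w = img.shape
    blocks, coords = [], []
    for y in range(0, h - b + 1, step):
        for x in range(0, w - b + 1, step):
            blocks.append(img[y:y+b, x:x+b].astype(float))
            coords.append((y, x))
    feats = np.array([blk.mean() for blk in blocks])
    # 1-D clustering of the mean-intensity feature into equal-width bins
    edges = np.linspace(feats.min(), feats.max() + 1e-9, n_clusters + 1)
    labels = np.digitize(feats, edges[1:-1])
    matches = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        for i in range(len(idx)):            # exhaustive search, but per cluster
            for j in range(i + 1, len(idx)):
                ia, ib = idx[i], idx[j]
                (ya, xa), (yb, xb) = coords[ia], coords[ib]
                if abs(ya - yb) + abs(xa - xb) <= b:
                    continue                 # skip overlapping neighbour blocks
                if np.abs(blocks[ia] - blocks[ib]).mean() < thr:
                    matches.append((coords[ia], coords[ib]))
    return matches
```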
Abstract: In this paper, we propose a new architecture for the implementation of the N-point Fast Fourier Transform (FFT), based on the radix-2 decimation-in-frequency algorithm. This architecture is based on a pipeline circuit that can process a stream of samples and produce two FFT output samples every clock cycle. Compared to existing implementations, the proposed architecture achieves double the processing speed with the same circuit complexity.
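For reference, a plain (software, non-pipelined) radix-2 decimation-in-frequency FFT can be written as follows; the proposed hardware pipeline reorganizes these butterfly stages for throughput, but the per-stage arithmetic is the standard one shown here.

```python
import numpy as np

def fft_dif_radix2(x):
    """Radix-2 decimation-in-frequency FFT (N must be a power of two)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    half = N // 2
    twiddle = np.exp(-2j * np.pi * np.arange(half) / N)
    a = x[:half] + x[half:]               # feeds the even-indexed outputs
    b = (x[:half] - x[half:]) * twiddle   # feeds the odd-indexed outputs
    X = np.empty(N, dtype=complex)
    X[0::2] = fft_dif_radix2(a)
    X[1::2] = fft_dif_radix2(b)
    return X

# quick check against NumPy's FFT
x = np.random.randn(64) + 1j * np.random.randn(64)
assert np.allclose(fft_dif_radix2(x), np.fft.fft(x))
```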
Abstract: Model mapping and transformation are important processes in high-level system abstraction, and form the cornerstone of model-driven architecture (MDA) techniques. Considerable research in this field has been devoted to static system abstraction, despite the fact that most systems are dynamic, with high-frequency changes in behavior. In this paper we provide an overview of work that has been done on behavior model mapping and transformation, based on: (1) the completeness of the platform-independent model (PIM); (2) the semantics of behavioral models; (3) languages supporting behavior model transformation processes; and (4) an evaluation of model composition to identify the best approach to describing large systems with high complexity.
Abstract: In order to meet the limits imposed on automotive
emissions, engine control systems are required to constrain air/fuel
ratio (AFR) in a narrow band around the stoichiometric value, due to
the strong decay of catalyst efficiency in case of rich or lean mixture.
This paper presents a model of a sample spark ignition engine and
demonstrates Simulink's capabilities to model an internal combustion
engine from the throttle to the crankshaft output. We used well-defined
physical principles supplemented, where appropriate, with
empirical relationships that describe the system's dynamic behavior
without introducing unnecessary complexity. We also present a PID
tuning method that uses an adaptive fuzzy system to model the
relationship between the controller gains and the target output
response, with the response specification set by desired percent
overshoot and settling time. The adaptive fuzzy based input-output
model is then used to tune on-line the PID gains for different
response specifications. Experimental results demonstrate that better
performance can be achieved with adaptive fuzzy tuning relative to
similar alternative control strategies. The actual response
specifications with adaptive fuzzy matched the desired response
specifications.
Abstract: We investigate efficient spreading codes for transmitter-based techniques in code division multiple access (CDMA) systems. The channel is considered to be known at the transmitter, which is usual in a time division duplex (TDD) system, where the channel is assumed to be the same on the uplink and downlink. For such a TDD/CDMA system, both bitwise and blockwise multiuser transmission schemes are considered, in which complexity is transferred to the transmitter side so that the receiver has minimum complexity. Different spreading codes are considered at the transmitter to spread the signal efficiently over the entire spectrum. The bit error rate (BER) curves portray the efficiency of the codes in the presence of multiple access interference (MAI) as well as intersymbol interference (ISI).
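The abstract does not name the specific code families compared; as background, orthogonal Walsh-Hadamard sequences are one common choice of spreading code in such systems, and the sketch below generates them and spreads a BPSK symbol stream with them.

```python
import numpy as np

def walsh_hadamard(k):
    """Return the 2^k x 2^k Hadamard matrix; its rows are Walsh spreading codes."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

def spread(bits, code):
    """BPSK-map bits (0/1) and spread each symbol by the given code."""
    symbols = 1 - 2 * np.asarray(bits)          # 0 -> +1, 1 -> -1
    return np.kron(symbols, code)                # chip-rate transmit sequence

codes = walsh_hadamard(3)                        # 8 orthogonal length-8 codes
tx_user0 = spread([0, 1, 1, 0], codes[0])
tx_user5 = spread([1, 0, 0, 1], codes[5])
# orthogonality: despreading the sum with each code recovers only that user
rx = (tx_user0 + tx_user5).reshape(-1, 8)
print(rx @ codes[0] / 8)                         # user 0's +/-1 symbols
print(rx @ codes[5] / 8)                         # user 5's +/-1 symbols
```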
Abstract: In diversity-rich environments, such as Ultra-Wideband (UWB) applications, the a priori determination of the number of strong diversity branches is difficult because of the considerably large number of diversity paths, which are characterized by a variety of power delay profiles (PDPs). Several Rake implementations have been proposed in the past in order to reduce the number of estimated and combined paths. To this end, we introduce two adaptive Rake receivers, which combine a subset of the resolvable paths by simultaneously considering the quality of both the total combined output signal-to-noise ratio (SNR) and the individual SNR of each path. These schemes achieve better adaptation to channel conditions than other known receivers, without further increasing the complexity. Their performance is evaluated in different practical UWB channels, whose models are based on extensive propagation measurements. The proposed receivers trade off power consumption, complexity and the performance gain of the additional paths, resulting in significant savings in power and computational resources.
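A simplified, hypothetical illustration of the selection principle (keep a path only if its individual SNR passes a test relative to the strongest path and it still improves the running combined SNR appreciably) is sketched below; the thresholds and per-path SNR values are assumptions, not the proposed receivers.

```python
import numpy as np

def select_rake_paths(path_snrs, rel_thresh=0.1, target_gain=0.05):
    """Pick a subset of resolvable paths for combining (illustrative sketch).

    A path is kept if its SNR is at least rel_thresh times the strongest
    path's SNR *and* it still improves the running combined SNR by more
    than target_gain (relative). With maximal-ratio combining, the combined
    SNR is the sum of the per-path SNRs of the combined fingers.
    """
    order = np.argsort(path_snrs)[::-1]          # strongest path first
    strongest = path_snrs[order[0]]
    combined, selected = 0.0, []
    for i in order:
        snr = path_snrs[i]
        if snr < rel_thresh * strongest:
            break                                # remaining paths are too weak
        if combined > 0 and snr / combined < target_gain:
            break                                # marginal gain not worth a finger
        selected.append(i)
        combined += snr                          # MRC: per-path SNRs add up
    return selected, combined

snrs = np.array([4.0, 0.2, 1.5, 0.05, 0.8, 0.01])   # hypothetical per-path SNRs
fingers, total = select_rake_paths(snrs)
print(fingers, total)                                # -> [0, 2, 4], 6.3
```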
Abstract: We propose an improved version of elastic graph matching (EGM) as a face detector, called multi-scale EGM (MS-EGM). In this improvement, a Gabor wavelet-based pyramid reduces the computational complexity of the feature representation often used in conventional EGM, while preserving a critical amount of the information in an image. The MS-EGM achieves higher detection performance than the Viola-Jones object detection algorithm with its AdaBoost cascade of Haar-like features. We also show that the MS-EGM attains rapid detection speeds, comparable to the Viola-Jones method. We find further benefits of the MS-EGM in terms of topological feature representation of a face.
Abstract: One of the most basic tasks of control engineers is the tuning of controllers. There are always several process loops in a plant that require tuning. Auto-tuned Proportional-Integral-Derivative (PID) controllers are designed for applications where large load changes are expected, or where extreme accuracy and fast response times are required. The algorithm presented in this paper tunes a PID controller to obtain its parameters with minimum computational complexity. It requires continuous analysis of the variation of a few parameters, and lets the program carry out the plant test and calculate the controller parameters to adjust and optimize the variables for the best performance. The algorithm developed needs less time than a normal step-response test for continuous tuning of the PID through gain scheduling.
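For context, the classical open-loop (reaction-curve) Ziegler-Nichols rules that such auto-tuning algorithms are typically compared against derive the PID gains from a single step-response test; a minimal sketch follows, with the process gain K, apparent dead time L and time constant T taken from that test (the example values are hypothetical).

```python
def zn_open_loop_pid(K, L, T):
    """Classical Ziegler-Nichols reaction-curve PID rules.

    K: process gain, L: apparent dead time, T: time constant,
    all read from one open-loop step-response test.
    Returns (Kp, Ti, Td) for a controller Kp*(1 + 1/(Ti*s) + Td*s).
    """
    Kp = 1.2 * T / (K * L)
    Ti = 2.0 * L
    Td = 0.5 * L
    return Kp, Ti, Td

# hypothetical first-order-plus-dead-time process: K=2, L=0.5 s, T=4 s
print(zn_open_loop_pid(2.0, 0.5, 4.0))   # -> (4.8, 1.0, 0.25)
```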
Abstract: In this work, we present a novel active learning approach for learning a visual object detection system. Our system is composed of an active learning mechanism acting as a wrapper around a sub-algorithm that implements an online boosting-based object detector. At its core is a combination of a bootstrap procedure and a semi-automatic learning process based on online boosting. The idea is to exploit the availability of the classifier during learning to automatically label training samples and incrementally improve the classifier. This addresses the issue of reducing labeling effort while obtaining better performance. In addition, we propose a verification process for further improvement of the classifier: re-updates on already seen data are allowed during learning in order to stabilize the detector. The main contribution of this empirical study is a demonstration that active learning based on an online boosting approach trained in this manner can achieve results comparable to, or even better than, a framework trained in the conventional manner with much more labeling effort. Empirical experiments on challenging data sets for specific object detection problems show the effectiveness of our approach.
Abstract: In this paper, a novel approach to generalized image retrieval based on semantic contents is presented. A combination of three feature extraction methods is used, namely color, texture, and the edge histogram descriptor, with provision to add new features in the future for better retrieval efficiency. Any combination of these methods that is most appropriate for the application can be used for retrieval; this is provided through the user interface (UI) in the form of relevance feedback. The image properties are analyzed using computer vision and image processing algorithms. For color, the histograms of the images are computed; for texture, co-occurrence-matrix-based entropy, energy, and related statistics are calculated; and for edge density, the Edge Histogram Descriptor (EHD) is computed. For the retrieval of images, a novel idea based on a greedy strategy is developed to reduce the computational complexity. The entire system was developed using AForge.Imaging (an open-source product), MATLAB .NET Builder, C#, and Oracle 10g. The system was tested with the Corel image database containing 1000 natural images and achieved better results.
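As a rough sketch of the first two feature extractors mentioned (color histogram and co-occurrence-based texture statistics), the NumPy-only code below is illustrative: the bin counts, grey-level quantization and pixel offset are assumed choices, and the EHD and greedy retrieval steps are not reproduced.

```python
import numpy as np

def color_histogram(img_rgb, bins=8):
    """Joint RGB histogram, normalised to sum to 1 (bins^3 features)."""
    q = (img_rgb.astype(int) // (256 // bins)).reshape(-1, 3)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def glcm_energy_entropy(gray, levels=16):
    """Energy and entropy of a horizontal grey-level co-occurrence matrix."""
    g = (gray.astype(int) * levels) // 256
    pairs = np.stack([g[:, :-1].ravel(), g[:, 1:].ravel()], axis=1)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (pairs[:, 0], pairs[:, 1]), 1)   # count co-occurrences
    p = glcm / glcm.sum()
    energy = (p ** 2).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return energy, entropy

# hypothetical 64x64 random image just to exercise the functions
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
color_feat = color_histogram(img)
energy, entropy = glcm_energy_entropy(img.mean(axis=2))
```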
Abstract: This paper describes the NEAR (Navigating Exhibitions, Annotations and Resources) panel, a novel interactive visualization technique designed to help people navigate and interpret groups of resources, exhibitions and annotations by revealing hidden relations such as similarities and references. NEAR is implemented on A•VI•RE, an extended online information repository. A•VI•RE supports a semi-structured collection of exhibitions containing various resources and annotations. Users are encouraged to contribute, share, annotate and interpret resources in the system by building their own exhibitions and annotations. However, it is hard to navigate smoothly and efficiently in A•VI•RE because of its high capacity and complexity. We present a visual panel that implements new navigation and communication approaches that support discovery of implied relations. By quickly scanning and interacting with NEAR, users can see not only implied relations but also potential connections among different data elements. NEAR was tested by several users in the A•VI•RE system and shown to be a supportive navigation tool. In the paper, we further analyze the design, report the evaluation and consider its usage in other applications.
Abstract: The frequency content of non-stationary signals varies with time. For proper characterization of such signals, a smart time-frequency representation is necessary. Classically, the STFT (short-time Fourier transform) is employed for this purpose. Its limitation is its fixed time-frequency resolution. To overcome this drawback, an enhanced STFT version is devised. It is based on a signal-driven sampling scheme named cross-level sampling. It can adapt the sampling frequency and the window function (length and shape) by following the local variations of the input signal. This adaptation gives the proposed technique its appealing features: adaptive time-frequency resolution and computational efficiency.
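For reference, the classical fixed-window STFT whose resolution limitation is discussed above can be sketched as follows; the signal-driven cross-level sampling itself is not reproduced here, and the window length and hop size are arbitrary.

```python
import numpy as np

def stft(x, win_len=256, hop=64):
    """Classical STFT with a fixed Hann window: every frame gets the same
    time-frequency resolution, which is the limitation discussed above."""
    win = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack([x[i * hop:i * hop + win_len] * win
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)          # n_frames x (win_len//2 + 1)

# hypothetical chirp whose frequency content varies with time
t = np.linspace(0, 1, 8000)
x = np.sin(2 * np.pi * (50 + 400 * t) * t)
S = stft(x)
print(S.shape)
```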
Abstract: Software maintenance, and mainly software comprehension, poses the largest costs in the software lifecycle. In order to assess the cost of software comprehension, various complexity measures have been proposed in the literature. This paper proposes new cognitive-spatial complexity measures, which combine the impact of the spatial as well as the architectural aspects of the software to compute the software complexity. The spatial aspect of the software complexity is taken into account using the lexical distances (in number of lines of code) between different program elements, and the architectural aspect is taken into consideration using the cognitive weights of the control structures present in the control flow of the program. The proposed measures are evaluated using standard axiomatic frameworks and are then compared with the corresponding existing cognitive complexity measures as well as the spatial complexity measures for object-oriented software. This study establishes that the proposed measures are better indicators of the cognitive effort required for software comprehension than the other existing complexity measures for object-oriented software.
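A rough, hypothetical proxy for the lexical-distance (spatial) component only, and not the proposed cognitive-spatial measures, could look like the following: it averages the distance in lines between each function definition and its call sites in a source file.

```python
import re

def function_spatial_complexity(source):
    """Average lexical distance (in lines) between each function's
    definition and its call sites; a rough spatial-complexity proxy."""
    lines = source.splitlines()
    defs = {m.group(1): i
            for i, line in enumerate(lines)
            if (m := re.match(r"\s*def\s+(\w+)", line))}
    distances = []
    for name, def_line in defs.items():
        for i, line in enumerate(lines):
            if i != def_line and not line.lstrip().startswith("def") \
               and re.search(rf"\b{name}\s*\(", line):
                distances.append(abs(i - def_line))
    return sum(distances) / len(distances) if distances else 0.0
```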
Abstract: Atmospheric stability plays the most important role in
the transport and dispersion of air pollutants. Different methods are
used for stability determination with varying degrees of complexity.
Most of these methods are based on the relative magnitude of
convective and mechanical turbulence in atmospheric motions.
Richardson number, Monin-Obukhov length, Pasquill-Gifford stability classification and Pasquill-Turner stability classification are the most common parameters and methods. The Pasquill-Turner Method (PTM), which is employed in this study, makes use of observations of wind speed, insolation and the time of day to classify atmospheric stability with distinguishable indices. In this study, a model is presented to determine atmospheric stability conditions using the PTM. As a case study, meteorological data from the Mehrabad station in Tehran from 2000 to 2005 are applied to the model. Three different categories are considered to deduce the pattern of stability conditions. First, the overall pattern of stability classification is obtained; the results show that the atmosphere is in stable, neutral and unstable conditions 38.77%, 27.26% and 33.97% of the time, respectively. It is also observed that days are mostly unstable (66.50%) while nights are mostly stable (72.55%). Second, monthly and seasonal patterns are derived; the results indicate that the relative frequency of stable conditions decreases from January to June and increases from June to December, while the results for unstable conditions behave in exactly the opposite manner. Autumn is the most stable season, with a relative frequency of 50.69% for stable conditions, compared with 42.79%, 34.38% and 27.08% for winter, summer and spring, respectively. The hourly stability pattern is the third category; it shows that unstable conditions are dominant from approximately 03-15 GMT and 04-12 GMT for the warm and cold seasons, respectively. Finally, the correlation between atmospheric stability and CO concentration is obtained.
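For illustration, the closely related classical Pasquill-Gifford lookup by wind speed and insolation (the PTM used in this study additionally works with Turner's net radiation index and the time of day) can be coded as a small table; the class boundaries below are the standard published ones.

```python
def pasquill_class(wind_ms, insolation=None, night_cloud_ge_half=None):
    """Classical Pasquill-Gifford stability lookup (illustrative only).

    wind_ms: 10 m wind speed in m/s.
    insolation: 'strong' | 'moderate' | 'slight' for daytime, else None.
    night_cloud_ge_half: True if night-time cloud cover >= 4/8, else False.
    Classes range from A (very unstable) to F (stable); D is neutral.
    """
    day_table = {            # columns correspond to the wind-speed bands below
        'strong':   ['A', 'A-B', 'B', 'C', 'C'],
        'moderate': ['A-B', 'B', 'B-C', 'C-D', 'D'],
        'slight':   ['B', 'C', 'C', 'D', 'D'],
    }
    night_table = {True:  [None, 'E', 'D', 'D', 'D'],   # >= 4/8 cloud cover
                   False: [None, 'F', 'E', 'D', 'D']}   # <= 3/8 cloud cover
    bands = [2, 3, 5, 6]                                 # wind-speed breakpoints (m/s)
    row = sum(wind_ms >= b for b in bands)
    if insolation is not None:
        return day_table[insolation][row]
    return night_table[night_cloud_ge_half][row]

print(pasquill_class(2.5, insolation='moderate'))        # -> 'B'
print(pasquill_class(4.0, night_cloud_ge_half=False))    # -> 'E'
```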