Abstract: Deoxyribonucleic acid (DNA) computing has
emerged as an interdisciplinary field that draws together chemistry,
molecular biology, computer science, and mathematics. In this
paper, the possibility of using DNA-based computing to solve an absolute
1-center problem by molecular manipulation is presented. This is
the first attempt to solve such a problem by a DNA-based
computing approach. Since part of the procedure involves
shortest-path computation, research on DNA computing for the
shortest-path Traveling Salesman Problem (TSP) is reviewed.
These approaches are studied, and the most appropriate one is adapted
in designing the computation procedure. The DNA-based
computation is designed in such a way that every path is encoded by
oligonucleotides and the path's length is directly proportional to the
length of the oligonucleotides. Using these properties, gel electrophoresis
is performed in order to separate the respective DNA molecules
according to their length. One expectation arising from this paper is that
it is possible to verify an instance of the absolute 1-center problem using
DNA computing through laboratory experiments.
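The length-encoding idea can be illustrated in software: in the sketch below, each shortest path of an invented four-node graph becomes a strand whose length is proportional to the path's weight, and the "electrophoresis" step simply compares strand lengths. The graph, weights, and helper names are illustrative assumptions, not from the paper.

```python
# In-silico sketch: encode each shortest path as a DNA strand whose length
# is proportional to the path's weight, then mimic gel electrophoresis by
# comparing strand lengths. Graph and weights are invented for illustration.
from itertools import permutations

# Hypothetical weighted graph with symmetric edge weights.
edges = {("a", "b"): 2, ("b", "c"): 3, ("a", "c"): 6, ("c", "d"): 1}
nodes = {"a", "b", "c", "d"}

def weight(u, v):
    return edges.get((u, v)) or edges.get((v, u))

def shortest(u, v):
    """Brute-force shortest path weight between u and v (small graphs only)."""
    if u == v:
        return 0
    best = None
    for k in range(len(nodes)):
        for mid in permutations(nodes - {u, v}, k):
            path = (u, *mid, v)
            ws = [weight(path[i], path[i + 1]) for i in range(len(path) - 1)]
            if None in ws:
                continue
            total = sum(ws)
            best = total if best is None else min(best, total)
    return best

def oligo(u, v):
    """One base per unit of weight: strand length encodes path length."""
    return "A" * shortest(u, v)

def center():
    """Vertex-restricted 1-center: the vertex whose longest strand
    (its eccentricity) is the shortest after length separation."""
    ecc = {u: max(len(oligo(u, v)) for v in nodes) for u in nodes}
    return min(ecc, key=ecc.get)

print(center())
```

Sorting strands by length is exactly what gel electrophoresis provides physically, which is why the paper lets strand length carry the path-weight information.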
Abstract: This paper presents a framework for
organizational knowledge management that seeks to deploy a
standardized structure for the integrated management of knowledge:
a common language based on domains, processes, and global
indicators inspired by the COBIT 5 framework (ISACA, 2012),
which supports the integration of three technologies: enterprise
information architecture (EIA), business process modeling (BPM),
and service-oriented architecture (SOA). The Gomak Framework is a
management platform that seeks to integrate the information
technology infrastructure, the structure of applications, the information
infrastructure, and the business logic and business model to support a
sound strategy of organizational knowledge management, under a
process-based approach and concurrent engineering. Concurrent
engineering (CE) is a systematic approach to integrated product
development that responds to customer expectations, involving all
perspectives in parallel, from the beginning of the product life cycle
(European Space Agency, 2000).
Abstract: In this work we present a solution for digital
automatic gain control (DAGC) in WLAN receivers compatible with the
IEEE 802.11a/g standards. These standards define communication in the 5/2.4
GHz bands using the Orthogonal Frequency Division Multiplexing (OFDM)
modulation scheme. The WLAN transceiver we used
enables gain control over a Low Noise Amplifier (LNA) and a
Variable Gain Amplifier (VGA). The control over these signals is
performed in our digital baseband processor using a dedicated hardware
block, the DAGC. The DAGC automatically controls the VGA and LNA in order to achieve a better
signal-to-noise ratio, decrease the Frame Error Rate (FER), and hold the
average power of the baseband signal close to the desired set point.
The DAGC function in the baseband processor is performed in a few steps:
measuring power levels of baseband samples of an RF signal,
accumulating the differences between the measured power level and the
actual gain setting, adjusting a gain factor of the accumulation, and
applying the adjusted gain factor to the baseband values. Based on
measurements of the dependence of the RSSI signal on input power, we
concluded that this digital AGC can be implemented by applying
a simple linearization of the RSSI. This solution is simple yet
effective, and it reduces the complexity and power consumption of the
DAGC. The DAGC has been implemented and tested both in FPGA and in
ASIC as part of our WLAN baseband processor. Finally, we have
integrated this circuit into a compact WLAN PCMCIA board based on
MAC and baseband ASIC chips designed by us.
Abstract: In this paper, a class of impulsive BAM fuzzy cellular neural networks with time delays in the leakage terms is formulated and investigated. By establishing a delay differential inequality and employing M-matrix theory, some sufficient conditions ensuring the existence, uniqueness, and global exponential stability of the equilibrium point for impulsive BAM fuzzy cellular neural networks with time delays in the leakage terms are obtained. In particular, a precise estimate of the exponential convergence rate is also provided, which depends on the system parameters and the impulsive perturbation intensity. It is believed that these results are significant and useful for the design and applications of BAM fuzzy cellular neural networks. An example is given to show the effectiveness of the results obtained here.
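For reference, the global exponential stability property that such sufficient conditions guarantee can be stated in the standard generic form (this is the textbook definition, not the paper's specific criterion):

```latex
\exists\, M \ge 1,\ \lambda > 0:\quad
\| x(t) - x^{*} \| \le M \, \| \phi - x^{*} \| \, e^{-\lambda (t - t_0)},
\qquad t \ge t_0,
```

where \(x^{*}\) is the equilibrium point, \(\phi\) the initial function, and \(\lambda\) the exponential convergence rate whose estimate the paper provides.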
Abstract: A serious problem on the WWW is finding reliable
information. Not everything found on the Web is true and the
Semantic Web does not change that in any way. The problem will be
even more crucial for the Semantic Web, where agents will be
integrating and using information from multiple sources. Thus, if an
incorrect premise is used due to a single faulty source, then any
conclusions drawn may be in error. Consequently, statements published on
the Semantic Web have to be seen as claims rather than as facts, and
there should be a way to decide which among many possibly
inconsistent sources is most reliable. In this work, we propose a trust
model for the Semantic Web. The proposed model is inspired by the
use of trust in human society. Trust is a type of social knowledge and
encodes evaluations about which agents can be taken as reliable
sources of information or services. Our proposed model allows
agents to decide which among different sources of information to
trust, and thus to act rationally on the Semantic Web.
Abstract: Super-resolution nowadays refers to producing a high-resolution
image from several low-resolution noisy frames. In
this work, we consider the problem of high-quality interpolation of a
single noise-free image. Such images may come from different sources,
e.g., they may be frames of videos, individual pictures, etc. In
our approach, the encoder applies a downsampling via
bidimensional interpolation of each frame, and the decoder
applies an upsampling by which we restore the original size of the
image. If the compression ratio is very high, then we use a
convolutive mask that restores the edges, eliminating the blur.
Finally, both the encoder and the complete decoder are implemented
on General-Purpose computation on Graphics Processing Units
(GPGPU) cards. In fact, the mentioned mask is coded inside the texture
memory of a GPGPU.
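The encoder/decoder pair described above can be sketched on the CPU with a toy grayscale image: the encoder downsamples (here by 2x2 block averaging) and the decoder upsamples back to the original size (nearest-neighbour replication for brevity, standing in for the bidimensional interpolation and GPGPU kernels of the actual scheme). The image values are invented for illustration.

```python
# Toy CPU sketch of the downsample/upsample codec. The real system uses
# bidimensional interpolation plus an edge-restoring convolutive mask,
# all running on a GPGPU; this sketch only shows the size round-trip.

def encode(img):
    """2x2 block-average downsampling of a list-of-lists grayscale image."""
    h, w = len(img), len(img[0])
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
              img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(w // 2)]
            for y in range(h // 2)]

def decode(small, h, w):
    """Upsample back to h x w by replication (stand-in for interpolation)."""
    return [[small[y // 2][x // 2] for x in range(w)] for y in range(h)]

img = [[10, 10, 50, 50],
       [10, 10, 50, 50],
       [90, 90, 30, 30],
       [90, 90, 30, 30]]
restored = decode(encode(img), 4, 4)
print(restored == img)
```

On this piecewise-constant test image the round-trip is lossless; on natural images the residual blur is what the convolutive mask in the decoder is meant to remove.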
Abstract: Speech enhancement is the process of eliminating
noise and increasing the quality of a speech signal that is
contaminated with noise and other kinds of distortion. This paper
concerns developing an optimum cascaded system for speech enhancement.
This aim is attained without diminishing any relevant speech
information and without much computational and time complexity.
The LMS algorithm, spectral subtraction, and the Kalman filter have been
deployed as the main de-noising algorithms in this work. Since these
algorithms suffer from their respective shortcomings, this work has been
undertaken to design cascaded systems in different combinations and
to evaluate such cascades by qualitative (listening) and
quantitative (SNR) tests.
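One stage of such a cascade, the LMS adaptive noise canceller, can be sketched as follows: the reference input is noise correlated with the corruption in the primary channel, and the LMS error output is the enhanced signal. The signal, noise model, filter length, and step size are illustrative assumptions, not the paper's configuration.

```python
# Minimal LMS adaptive noise canceller sketch. primary = speech + noise;
# reference = correlated noise pickup; the LMS error is the enhanced output.
import math
import random

random.seed(0)
N, taps, mu = 2000, 4, 0.01
clean = [math.sin(0.05 * n) for n in range(N)]          # stand-in "speech"
noise = [random.uniform(-1, 1) for _ in range(N)]
primary = [c + v for c, v in zip(clean, noise)]
reference = noise                                        # ideal reference

w = [0.0] * taps
enhanced = []
for n in range(N):
    x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
    y = sum(wk * xk for wk, xk in zip(w, x))             # noise estimate
    e = primary[n] - y                                   # enhanced sample
    w = [wk + 2 * mu * e * xk for wk, xk in zip(w, x)]   # LMS weight update
    enhanced.append(e)

def mse(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) / len(a)

# After convergence the enhanced output should sit closer to the clean
# signal than the noisy primary did.
print(mse(enhanced[500:], clean[500:]) < mse(primary[500:], clean[500:]))
```

In a cascade, this output would feed the next stage (e.g. spectral subtraction or a Kalman filter), each stage compensating for the shortcomings of the previous one.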
Abstract: An image compression method has been developed
using a fuzzy edge image with the basic Block Truncation Coding
(BTC) algorithm. The fuzzy edge image has been validated against
classical edge detectors, using the results of the well-known
Canny edge detector as the reference, prior to its use in the proposed method. The
bit plane generated by the conventional BTC method is replaced with
the fuzzy bit plane generated by the logical OR operation between
the fuzzy edge image and the corresponding conventional BTC bit
plane. The input image is encoded with the block mean and standard
deviation and the fuzzy bit plane. The proposed method has been
tested with 8-bit/pixel test images of size 512×512 and found to
be superior, with a better Peak Signal-to-Noise Ratio (PSNR),
compared to the conventional BTC and adaptive bit plane selection
BTC (ABTC) methods. The raggedness, jagged appearance, and
ringing artifacts at sharp edges are greatly reduced in
images reconstructed by the proposed method with the fuzzy bit
plane.
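The baseline BTC step the method builds on can be sketched per block: each block is coded by its mean, standard deviation, and a bit plane, and the proposed variant then ORs that bit plane with a fuzzy edge bit plane. The block values and the edge plane below are invented for illustration; the paper derives the edge plane from its fuzzy edge detector.

```python
# Sketch of basic Block Truncation Coding for one block, plus the OR-fusion
# of the BTC bit plane with a (hypothetical) fuzzy edge bit plane.
import math

def btc_encode(block):
    k = len(block)
    m = sum(block) / k
    s = math.sqrt(sum((p - m) ** 2 for p in block) / k)
    bits = [1 if p >= m else 0 for p in block]
    return m, s, bits

def btc_decode(m, s, bits):
    k, q = len(bits), sum(bits)
    if q in (0, k):                       # flat block: single level
        return [m] * k
    lo = m - s * math.sqrt(q / (k - q))   # reconstruction level for 0-bits
    hi = m + s * math.sqrt((k - q) / q)   # reconstruction level for 1-bits
    return [hi if b else lo for b in bits]

block = [10, 12, 200, 202]
m, s, bits = btc_encode(block)

# Proposed variant: fuse an edge bit plane into the BTC plane with OR.
edge_bits = [0, 1, 0, 0]                  # hypothetical fuzzy edge plane
fused = [b | e for b, e in zip(bits, edge_bits)]
print(bits, fused)
```

The two reconstruction levels are chosen so that the block mean and standard deviation are preserved, which is the defining property of BTC.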
Abstract: Most Question Answering (QA) systems are
composed of three main modules: question processing,
document processing, and answer processing. The question
processing module plays an important role in QA systems; if
this module does not work properly, it creates problems for the
other sections. Moreover, the answer processing module is an
emerging topic in Question Answering, where systems
are often required to rank and validate candidate answers.
These techniques, aiming at finding short and precise answers,
are often based on semantic classification.
This paper discusses a new model for question
answering which improves two main modules: question
processing and answer processing.
Two important components form the basis
of question processing. The first component is question
classification, which specifies the types of the question and the answer.
The second is reformulation, which converts the user's
question into a question understandable by the QA system in a
specific domain. The answer processing module consists of
candidate answer filtering and candidate answer ordering
components, and it also has a validation section for interacting
with the user. This module makes it easier to find the exact
answer. In this paper we describe the question and answer
processing modules by modeling, implementing, and
evaluating the system. The system was implemented in two versions.
Results show that Version 1 gave correct answers to 70%
of the questions (30 correct answers to 50 asked questions) and
Version 2 gave correct answers to 94% of the questions (47
correct answers to 50 asked questions).
Abstract: Analysis of heart rate variability (HRV) has become a
popular non-invasive tool for assessing the activities of the autonomic
nervous system. Most of the methods were borrowed from techniques
used for time series analysis; currently used methods include time
domain, frequency domain, geometrical, and fractal methods. A new
technique, which searches for pattern repeatability in a time series, is
proposed for quantifying heart rate (HR) time series. This set of
indices, termed the pattern repeatability measure and the
pattern repeatability ratio, is able to distinguish HR data clearly
from noise and electroencephalogram (EEG) data. The results of analysis
using these measures give an insight into the fundamental difference
between the composition of HR time series with respect to EEG and
noise.
Abstract: In this paper we introduce an ultra low power CMOS
LC oscillator and analyze a method to design a low power low phase
noise complementary CMOS LC oscillator. A 1.8 GHz oscillator is
designed based on this analysis. The circuit operates from a 1.1 V
power supply and dissipates 0.17 mW. The oscillator is also
optimized for low phase noise behavior. The oscillator phase noise is
-126.2 dBc/Hz and -144.4 dBc/Hz at 1 MHz and 8 MHz offset
respectively.
Abstract: The structural interpretation of a part of eastern Potwar
(Missa Keswal) has been carried out with available seismological,
seismic and well data. Seismological data contains both the source
parameters and fault plane solution (FPS) parameters and seismic data
contains ten seismic lines that were re-interpreted by using well data.
Structural interpretation depicts two broad types of fault sets, namely
thrust and back-thrust faults. Together, these faults give rise to pop-up
structures in the study area and are also responsible for many structural
traps and for seismicity. Seismic interpretation includes time and depth
contour maps of the Chorgali Formation, while seismological interpretation
includes focal mechanism solutions (FMS), depth, frequency, and
magnitude bar graphs, and renewal of the seismotectonic map. The focal
mechanism solutions surrounding the study area are
correlated with different geological and structural maps of the area
to determine the nature of the subsurface faults. Results of
structural interpretation from both seismic and seismological data
show good correlation. It is hoped that the present work will aid in
better understanding the variations in the subsurface structure and
can be a useful tool for earthquake prediction, oil field planning, and
reservoir monitoring.
Abstract: The importance of supply chain and logistics
management has been widely recognised. Effective management of
the supply chain can reduce costs and lead times and improve
responsiveness to changing customer demands. This paper proposes a
multi-matrix real-coded Genetic Algorithm (MRGA) based
optimisation tool that minimises the total costs associated with supply
chain logistics. Owing to the finite capacity constraints of all parties
within the chain, a Genetic Algorithm (GA) often produces infeasible
chromosomes during the initialisation and evolution processes. In the
proposed algorithm, a chromosome initialisation procedure and crossover
and mutation operations that always guarantee feasible solutions
were embedded. The proposed algorithm was tested using three sizes
of benchmarking datasets of a logistic chain network, which are typical
of those faced by most global manufacturing companies. A half
fractional factorial design was carried out to investigate the influence
of alternative crossover and mutation operators by varying GA
parameters. The analysis of experimental results suggested that the
quality of solutions obtained is sensitive to the ways in which the
genetic parameters and operators are set.
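The feasibility guarantee referred to above can be illustrated with a small allocation sketch: chromosomes are created (and, if needed, repaired) so that no supplier capacity is exceeded and total demand is met, so no infeasible individuals ever enter the population. The capacities, demand, and repair rule are invented for illustration; the paper embeds such guarantees in its own initialisation, crossover, and mutation operators.

```python
# Sketch of feasibility-preserving chromosome initialisation for a
# capacitated allocation problem. All numbers are illustrative.
import random

random.seed(1)
capacities = [40, 30, 50]   # per-supplier limits (assumed)
demand = 90                 # total demand to satisfy (assumed)

def feasible_chromosome():
    """Randomly allocate demand without exceeding any capacity."""
    alloc = [0] * len(capacities)
    remaining = demand
    order = random.sample(range(len(capacities)), len(capacities))
    for i in order:
        if i == order[-1]:
            take = min(capacities[i], remaining)   # last supplier takes the rest
        else:
            take = random.randint(0, min(capacities[i], remaining))
        alloc[i] = take
        remaining -= take
    return alloc

def repair(alloc):
    """Top up any unmet demand within the remaining capacity headroom."""
    remaining = demand - sum(alloc)
    for i in range(len(alloc)):
        add = min(capacities[i] - alloc[i], remaining)
        alloc[i] += add
        remaining -= add
    return alloc

pop = [repair(feasible_chromosome()) for _ in range(20)]
ok = all(sum(c) == demand and all(g <= cap for g, cap in zip(c, capacities))
         for c in pop)
print(ok)
```

Because every individual is feasible by construction, the GA never wastes evaluations on chromosomes that violate the capacity constraints.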
Abstract: The paper describes a knowledge based system for
analysis of microscopic wear particles. Wear particles contained in
lubricating oil carry important information concerning machine
condition, in particular the state of wear. Experts (Tribologists) in the
field extract this information to monitor the operation of the machine
and ensure safety, efficiency, quality, productivity, and economy of
operation. This procedure is not always objective and it can also be
expensive. The aim is to classify these particles according to their
morphological attributes of size, shape, edge detail, thickness ratio,
color, and texture, and by using this classification thereby predict
wear failure modes in engines and other machinery. The attribute
knowledge links human expertise to the devised Knowledge Based
Wear Particle Analysis System (KBWPAS). The system provides an
automated and systematic approach to wear particle identification
which is linked directly to wear processes and modes that occur in
machinery. This brings consistency to wear judgment and prediction,
which leads to standardization and less dependence on
Tribologists.
Abstract: Due to the growing dynamics and complexity of
the market environment, production enterprises in particular are faced
with new logistic challenges. Moreover, it is in this dynamic
environment that the Logistic Operating Curve Theory, as a method
for describing the correlations between the logistic objectives,
reaches its limits. In order to convert this theory into a method for
dynamically monitoring production, this paper introduces
methods for reliably and quickly identifying structural changes
relevant to logistics.
Abstract: This paper presents a novel Rao-Blackwellised
particle filter (RBPF) for mobile robot simultaneous localization and
mapping (SLAM) using monocular vision. The particle filter is
combined with an unscented Kalman filter (UKF) to extend the path
posterior by sampling new poses that integrate the current observation,
which drastically reduces the uncertainty about the robot pose. The
landmark position estimation and update are also implemented through
the UKF. Furthermore, the number of resampling steps is determined
adaptively, which greatly reduces the particle depletion problem,
and evolution strategies (ES) are introduced to avoid particle
impoverishment. The 3D natural point landmarks are structured with
matching Scale Invariant Feature Transform (SIFT) feature pairs, and the
matching of multi-dimensional SIFT features is implemented with a
KD-tree at a time cost of O(log₂ N). Experimental results on a real robot
in our indoor environment show the advantages of our methods over
previous approaches.
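The adaptive resampling decision mentioned above is commonly implemented by resampling only when the effective sample size (ESS) of the particle weights drops below a threshold, which is what mitigates particle depletion. The sketch below uses that standard ESS test with systematic resampling; the threshold and weights are illustrative, not the paper's values.

```python
# Adaptive resampling sketch for a particle filter: resample only when the
# effective sample size falls below a fraction of the particle count.
import random

def ess(weights):
    """Effective sample size of a set of (unnormalized) weights."""
    s = sum(weights)
    norm = [w / s for w in weights]
    return 1.0 / sum(w * w for w in norm)

def maybe_resample(particles, weights, threshold=0.5):
    """Systematic resampling, triggered only when ESS < threshold * N."""
    n = len(particles)
    if ess(weights) >= threshold * n:
        return particles, weights           # weights still diverse: skip
    s = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / s
        cdf.append(acc)
    u0, out, i = random.random() / n, [], 0
    for k in range(n):
        u = u0 + k / n
        while u > cdf[i]:
            i += 1
        out.append(particles[i])
    return out, [1.0 / n] * n               # reset to uniform weights

# Balanced weights: no resampling. One dominant weight: resampling fires.
p = ["a", "b", "c", "d"]
print(maybe_resample(p, [1, 1, 1, 1])[0] == p)
print(maybe_resample(p, [0.97, 0.01, 0.01, 0.01])[1] == [0.25] * 4)
```

Skipping unnecessary resampling steps preserves particle diversity, which is exactly the depletion problem the abstract addresses.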
Abstract: Deep Brain Stimulation or DBS is a surgical treatment for Parkinson-s Disease with three stimulation parameters: frequency, pulse width, and voltage. The parameters should be selected appropriately to achieve effective treatment. This selection now, performs clinically. The aim of this research is to study chaotic behavior of recorded tremor of patients under DBS in order to present a computational method to recognize stimulation optimum voltage. We obtained some chaotic features of tremor signal, and discovered embedding space of it has an attractor, and its largest Lyapunov exponent is positive, which show tremor signal has chaotic behavior, also we found out, in optimal voltage, entropy and embedding space variance of tremor signal have minimum values in comparison with other voltages. These differences can help neurologists recognize optimal voltage numerically, which leads to reduce patients' role and discomfort in optimizing stimulation parameters and to do treatment with high accuracy.
Abstract: In this paper a new concept named the Intuitionistic Fuzzy
Multiset is introduced. The basic operations on Intuitionistic Fuzzy
Multisets, such as union, intersection, addition, and multiplication, are
discussed. An application of Intuitionistic Fuzzy Multisets to a medical diagnosis problem using a distance function is discussed in detail.
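The kind of distance-based diagnosis mentioned above can be sketched with ordinary intuitionistic fuzzy sets, where each element carries a (membership, non-membership) pair; a multiset generalizes this by allowing several such pairs per element, but one pair suffices for the sketch. The normalized Hamming distance used here is a standard choice, and the symptom values are invented for illustration, not the paper's data.

```python
# Distance-based diagnosis sketch over intuitionistic fuzzy sets: assign
# the patient to the diagnosis prototype at minimum normalized Hamming
# distance (including the hesitation degree pi = 1 - mu - nu).

def ifs_distance(a, b):
    """Normalized Hamming distance between two IFS, hesitation included."""
    n = len(a)
    total = 0.0
    for (mu1, nu1), (mu2, nu2) in zip(a, b):
        pi1, pi2 = 1 - mu1 - nu1, 1 - mu2 - nu2
        total += abs(mu1 - mu2) + abs(nu1 - nu2) + abs(pi1 - pi2)
    return total / (2 * n)

# Hypothetical prototypes over three symptoms: (membership, non-membership).
diagnoses = {
    "viral fever": [(0.8, 0.1), (0.6, 0.2), (0.2, 0.7)],
    "malaria":     [(0.9, 0.0), (0.2, 0.6), (0.7, 0.2)],
}
patient = [(0.85, 0.05), (0.25, 0.6), (0.65, 0.2)]

best = min(diagnoses, key=lambda d: ifs_distance(patient, diagnoses[d]))
print(best)
```

The non-membership and hesitation degrees let the model express "definitely absent" and "unknown" symptoms separately, which an ordinary fuzzy set cannot do.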
Abstract: The purpose of this paper is to examine the financing
practices of SMEs in Libya in two different phases of the business life
cycle: the start-up and matured stages. Moreover, SMEs' issues in
accessing bank loans are also identified. The study was conducted from
the demand side. The findings are based on a sample of
76 SMEs in Libya obtained through questionnaires. The results
pinpoint several things: evidently, SMEs use informal
financing sources, preferring personal savings; SME owners are
willing to apply for bank loans; and the most pressing problem
identified for not applying for a bank loan is that such loans bear
interest (a religious factor).
Abstract: This paper describes a possible use of
virtualization technology in teaching computer networks.
Virtualization can be used as a suitable tool for creating virtual
network laboratories, supplementing real laboratories and
network simulation software in teaching networking concepts. A short
description is given of characteristic projects in the area of
using virtualization technology in network simulation, network
experiments, and engineering education. A method for implementing such a
laboratory is also explained, together with possible laboratory
usage and the design of laboratory exercises. Finally, the
test results of the virtual laboratory are presented.